Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-11-28 Thread Matthew T. O'Connor
On Thu, 2002-11-28 at 01:58, Shridhar Daithankar wrote:
 There are differences in approach here. The reason I prefer polling rather than 
 signalling is that IMO vacuum should always be a low-priority activity, and as such it 
 does not deserve the overhead of signalling.
 
 A simpler way of integrating would be writing a C trigger on the pg_statistics 
 table (forgot the exact name). For every insert/update, watch the value and 
 trigger the vacuum daemon from a separate thread. (Assuming that you can create 
 a trigger on a view.)
 
 But Tom has pointed out earlier that even a couple of lines of trigger code on such 
 a table/view would be a huge performance hit in general..
 
 I would still prefer polling. It would serve the need for the foreseeable future..

Well, this is a debate that can probably only be solved after doing some
legwork, but I was envisioning something that just monitored the same
messages that get sent to the stats collector. I would think that would
be pretty lightweight. Perhaps we could even extend the stats collector
to also fire off the vacuum processes, since it already has all the
information we are polling for.

 The reason I brought up the issue of multiple processes/connections is starvation of 
 a DB.
 
 Say there are two DBs which are seriously hammered. Now if one DB starts 
 vacuuming and takes a long time, the other DB just keeps waiting for its turn for 
 vacuuming, and by the time vacuum is triggered, it might already have suffered 
 some performance hit.
 
 Of course these things are largely context dependent and the admin should be able to 
 make a better choice, but the app should be able to handle the worst situation..

Agreed.

 The other way round is to make AVD vacuum only one database. The DBA can launch 
 multiple instances of AVD for each database as he sees fit. That would be much 
 simpler..

Interesting thought.  I think this boils down to how many knobs we need
to put on this system. It might make sense to, say, allow up to X
concurrent vacuums; a 4-processor system might handle 4 concurrent
vacuums very well.  I understand what you are saying about starvation; I
was erring on the conservative side by only allowing one vacuum at a
time (also for simplicity of code :-).  The worst-case scenario is that
you suffer some performance hit, but the hit would be finite, since
vacuum will get to it fairly soon.

 Please send me the code offlist. I would go through it and get back to you by 
 early next week (bit busy right now)

already sent.





Re: [HACKERS] nested transactions

2002-11-28 Thread Hans-Jürgen Schönig
Is there going to be a way to use transactions inside transactions inside 
transactions?
In other words:

   BEGIN;
   BEGIN;
   BEGIN;
   BEGIN;

   COMMIT;
   COMMIT;
   COMMIT;
   COMMIT;

Is there a way to have some sort of recursive solution with every 
transaction but the first one being a child transaction?
Is there a way to implement that without too much extra effort?
I'm just curious how that could be done.

   Hans






Re: [HACKERS] InitDB Failure - PostgreSQL 7.2, RedHat 7.3, compile from source

2002-11-28 Thread Tom Lane
[EMAIL PROTECTED] writes:
 Due to the 32 character limit on column/table names, we needed to recompile 
PostgreSQL from the source with updated settings.  It compiles fine, but on running 
initdb, we get the following output:

Without bothering to examine the details, I'll bet you didn't do a full
recompile.  You need "make clean" then "make all" after changing
settings such as this one.

 However ideally we'd like to be able to go beyond 64.

Note that the reason 7.3 defaults to 64, and not more, is that we
measured noticeable performance degradation at 128 and beyond.

regards, tom lane




Re: [HACKERS] next value expression

2002-11-28 Thread Manfred Koizar
On 27 Nov 2002 11:51:13 -0500, Neil Conway [EMAIL PROTECTED] wrote:
Somewhat -- SQL2003 defines sequence generators that are pretty much
identical in functionality to PostgreSQL's sequences, although the
syntax is a bit different. I submitted a patch for 7.4 that adjusts the
CREATE SEQUENCE grammar to match SQL2003's CREATE SEQUENCE a little more
closely, but there's a bunch more work that can be done, if we want to
be fully SQL-compliant.

Neil, I'm not advocating a change.  As long as Postgres sequences
don't look like SQL2003 sequence generators, there is no problem if
they behave differently.  OTOH if we have standard syntax, I'd prefer
to have standard semantics, too.  Maybe we can have classic Postgres
syntax (nextval('...')) with classic Postgres behaviour and SQL2003
syntax (NEXT VALUE FOR ...) with SQL2003 behaviour side by side?
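
For illustration, the two forms side by side (a sketch; the NEXT VALUE FOR
statement is the SQL2003 syntax under discussion, not something PostgreSQL
accepts today):

    CREATE SEQUENCE seq;
    SELECT nextval('seq');       -- classic Postgres syntax and behaviour
    SELECT NEXT VALUE FOR seq;   -- SQL2003 syntax and semantics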

CURRENT_TIMESTAMP is another issue, because it looks like standard
SQL, but ...

Servus
 Manfred




[HACKERS] Alter table .. Add primary key

2002-11-28 Thread Rod Taylor
When doing an alter table .. add primary key operation on columns which
are not marked NOT NULL, would it be appropriate to mark the primary key
columns NOT NULL?

This follows create table, which auto-marks primary key columns NOT
NULL.


rbt=# \d ar
     Table "public.ar"
 Column | Type | Modifiers 
--------+------+-----------
 col    | r    | 

rbt=# alter table ar add primary key (col);
ERROR:  Existing attribute "col" cannot be a PRIMARY KEY because it is
not marked NOT NULL
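
Until then, the workaround is two statements (a sketch against the table
above; ALTER COLUMN ... SET NOT NULL is new in 7.3):

    ALTER TABLE ar ALTER COLUMN col SET NOT NULL;
    ALTER TABLE ar ADD PRIMARY KEY (col);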

-- 
Rod Taylor [EMAIL PROTECTED]





Re: [HACKERS] Is current_user a function ?

2002-11-28 Thread Rod Taylor
Force the system to treat it as an ordinary function call by quoting the name:

select "current_user"();
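
A quick illustration (the double quotes are what bypass the keyword rule):

    SELECT current_user;        -- keyword syntax mandated by the SQL spec
    SELECT "current_user"();    -- quoted identifier reaches the pg_catalog function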

On Thu, 2002-11-28 at 11:31, Masaru Sugawara wrote:
 Hi,
 
 As for some current_*** functions, "select current_user;" seems to
 work, but "select current_user();" doesn't.  Though current_user is
 defined as one of the functions, why does such an error occur?
 
 renew=# select current_user();
 ERROR:  parser: parse error at or near "(" at character 20
 
 
 Regards,
 Masaru Sugawara
 
 
 renew=# \df
    List of functions
  Result data type | Schema     | Name             | Argument data types
 ------------------+------------+------------------+---------------------
  ...
  name             | pg_catalog | current_database | 
  name             | pg_catalog | current_schema   | 
  name[]           | pg_catalog | current_schemas  | boolean
  text             | pg_catalog | current_setting  | text
  name             | pg_catalog | current_user     | 
  ...
 
 
 renew=# select current_user();
 ERROR:  parser: parse error at or near "(" at character 20
 
 renew=# select current_database();
  current_database 
 --
  renew
 (1 row)
 
 renew=# select current_schema();
  current_schema 
 
  public
 (1 row)
 
 renew=# select current_schemas(true);
current_schemas   
 -
  {pg_catalog,postgres,public}
 (1 row)
 
 renew=# select current_schemas(false);
current_schemas   
 -
  {postgres,public}
 (1 row)
 
 renew=# select current_setting('search_path');
  current_setting 
 -
  $user,public
 (1 row)
 
 
 
 
 
 
 
-- 
Rod Taylor [EMAIL PROTECTED]





Re: [HACKERS] PostGres and WIN32, a plea!

2002-11-28 Thread Merlin Moncure

 Hmm, ever tried using a large multiuser database such as a finance
 system using a Foxpro database? Network managers have been known to
 murder for less... :-)

Hmm, I have, and you could imagine the result :)
It was a small system, really, and everything was fine until I added my 10th
user.  Then my data left me like the parting of the Red Sea :).

Building a database system on lousy technology, only to rewrite it, is
something all database admins have to go through.  I think it's kind of like
coming of age.  On the unix side of things, you have mysql catching people
the same way.

FP did have a very nice query optimizer.  Also, FP views optimized the where
condition through the query, and have for quite some time (does PG do this
yet?).  I think the FP team was really on to something, till MS hamstrung the
project.

FP also had the ability to write user defined functions into the query,
something I thought I would have to give up forever, until I stumbled across
PG (from the mysql docs, go figure!)


Merlin






Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of insert/delete/update

2002-11-28 Thread Jim Beckstrom
Just for the humor of it, as well as to confirm Nick's perspective: 
years ago, on our in-house developed Burroughs mainframe dbms, we had a 
process called "garbage collect".

Nicolai Tufar wrote:

I always wondered if VACUUM is the right name for the process. Now, when
PostgreSQL is actively challenging in the Enterprise space, it might be a
good idea to give it a more enterprise-like name. Try to think how it looks
to an outside person to see us database professionals hold lengthy
discussions about the ways we vacuum a database. Why should you need to
vacuum a database? Is it dirty? In my personal opinion, something like
"space reclaiming daemon", "free-list organizer", "tuple recycle job" or
"segment coalesce process" would sound more business-like.

Regards,
Nick


- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
To: Curtis Faith [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED]; Ron Johnson [EMAIL PROTECTED];
PgSQL Performance ML [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Tuesday, November 26, 2002 9:09 PM
Subject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of
insert/delete/update


 

Good ideas.  I think the master solution is to hook the statistics
daemon information into an automatic vacuum that could _know_ which
tables need attention.

Curtis Faith wrote:
 tom lane wrote:
  Sure, it's just shuffling the housekeeping work from one place to
  another.  The thing that I like about Postgres' approach is that we
  put the housekeeping in a background task (VACUUM) rather than in the
  critical path of foreground transaction commit.

 Thinking with my marketing hat on, MVCC would be a much bigger win if
 VACUUM was not required (or was done automagically). The need for
 periodic VACUUM just gives ammunition to the PostgreSQL opponents who
 can claim we are deferring work but that it amounts to the same thing.

 A fully automatic background VACUUM will significantly reduce but will
 not eliminate this perceived weakness.

 However, it always seemed to me there should be some way to reuse the
 space more dynamically and quickly than a background VACUUM, thereby
 reducing the percentage of tuples that are expired in heavy update
 cases. If only a very tiny number of tuples on the disk are expired,
 this will reduce the aggregate performance/space penalty of MVCC into
 insignificance for the majority of uses.

 Couldn't we reuse tuple and index space as soon as there are no
 transactions that depend on the old tuple or index values? I have
 imagined that this was always part of the long-term master plan.

 Couldn't we keep a list of dead tuples in shared memory and look in
 the list first when deciding where to place new values for inserts or
 updates, so we don't have to rely on VACUUM (even a background one)?
 If there are expired tuple slots in the list, these would be used
 before allocating a new slot from the tuple heap.

 The only issue is determining the lowest transaction ID for in-process
 transactions, which seems relatively easy to do (if it's not already
 done somewhere).

 In the normal shutdown and startup case, a tuple VACUUM could be
 performed automatically. This would normally be very fast since there
 would not be many tuples in the list.

 Index slots would be handled differently since these cannot be
 substituted one for another. However, these could be recovered as part
 of every index page update. Pages would be scanned before being
 written, and any expired slots that had transaction IDs lower than the
 lowest active slot would be removed. This could be done for non-leaf
 pages as well and would result in only reorganizing a page that is
 already going to be written, thereby not adding much to the overall
 work.

 I don't think that internal pages containing pointers to values in
 nodes further down the tree that are no longer in the leaf nodes
 because of this partial expired-entry elimination will cause a
 problem, since searches and scans will still work fine.

 Does VACUUM do something that could not be handled in this realtime
 manner?

 - Curtis

--
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

Re: [HACKERS] [PERFORM] Realtime VACUUM, was: performance of insert/delete/update

2002-11-28 Thread Merlin Moncure
How about OPTIMIZE?

eg. optimize customers

instead of analyze, could be paired with aggressive

so, OPTIMIZE AGGRESSIVE

very much a glass half empty, half full type thing.  vacuum is not a
problem, it's a solution.

Merlin


Curtis Faith [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 tom lane wrote:
  Sure, it's just shuffling the housekeeping work from one place to
  another.  The thing that I like about Postgres' approach is that we
  put the housekeeping in a background task (VACUUM) rather than in the
  critical path of foreground transaction commit.

 Thinking with my marketing hat on, MVCC would be a much bigger win if
 VACUUM was not required (or was done automagically). The need for
 periodic VACUUM just gives ammunition to the PostgreSQL opponents who
 can claim we are deferring work but that it amounts to the same thing.

 A fully automatic background VACUUM will significantly reduce but will
 not eliminate this perceived weakness.

 However, it always seemed to me there should be some way to reuse the
 space more dynamically and quickly than a background VACUUM, thereby
 reducing the percentage of tuples that are expired in heavy update
 cases. If only a very tiny number of tuples on the disk are expired,
 this will reduce the aggregate performance/space penalty of MVCC into
 insignificance for the majority of uses.

 Couldn't we reuse tuple and index space as soon as there are no
 transactions that depend on the old tuple or index values? I have
 imagined that this was always part of the long-term master plan.

 Couldn't we keep a list of dead tuples in shared memory and look in
 the list first when deciding where to place new values for inserts or
 updates, so we don't have to rely on VACUUM (even a background one)?
 If there are expired tuple slots in the list, these would be used
 before allocating a new slot from the tuple heap.

 The only issue is determining the lowest transaction ID for in-process
 transactions, which seems relatively easy to do (if it's not already
 done somewhere).

 In the normal shutdown and startup case, a tuple VACUUM could be
 performed automatically. This would normally be very fast since there
 would not be many tuples in the list.

 Index slots would be handled differently since these cannot be
 substituted one for another. However, these could be recovered as part
 of every index page update. Pages would be scanned before being
 written, and any expired slots that had transaction IDs lower than the
 lowest active slot would be removed. This could be done for non-leaf
 pages as well and would result in only reorganizing a page that is
 already going to be written, thereby not adding much to the overall
 work.

 I don't think that internal pages containing pointers to values in
 nodes further down the tree that are no longer in the leaf nodes
 because of this partial expired-entry elimination will cause a
 problem, since searches and scans will still work fine.

 Does VACUUM do something that could not be handled in this realtime
 manner?

 - Curtis






[HACKERS] System Tables

2002-11-28 Thread Steve Jackson
Hi

Does anyone have a good pointer to a description of which information can 
be found where in the system tables? I am trying to code a generic procedure 
which gets information (like field type, field length, foreign keys...) 
about tables and fields from the system tables.

Thank you for your help in advance

sj





[HACKERS] record object type

2002-11-28 Thread Steve Jackson
Hi all

Is there any deeper description of the record type in plpgsql?

I am trying to iterate through whole rows and their fields, but there is nearly 
nothing written down, or at least I am finding nearly nothing.

Any help?

Thanks,
sj





[HACKERS] Hierarchical queries a la Oracle, for ver 7.3rc1

2002-11-28 Thread Evgen Potemkin
Hi there!

The patch itself is posted to pgsql-patches.

This is a new version of the patch I've posted.
This is for PG version 7.3rc1.

Changed syntax: it's now closer to Oracle's and allows operators other
than '='.
Removed the Const/Var trick; a new FakeVar node is used now, and as a side
effect it no longer needs initdb.
Added a little regression test.
Added more comments in the code.
A bit extended README.hier.

Working on the SQL99 version.

regards,

---
.evgen





Re: [HACKERS] nested transactions

2002-11-28 Thread Manfred Koizar
On Wed, 27 Nov 2002 22:47:33 -0500 (EST), Bruce Momjian
[EMAIL PROTECTED] wrote:
The interesting issue is that if we could set the commit/abort bits all
at the same time, we could have the parent/child dependency local to the
backend --- other backends don't need to know the parent, only the
status of the (subtransaction's) xid, and they need to see all those
xid's committed at the same time.

You mean the commit/abort bit in the tuple headers?  Yes, this would
be interesting, but I see no way this could be done.  If it could,
there would be no need for pg_clog.

Reading your paragraph above one more time I think you mean the bits
in pg_clog:  Each subtransaction gets its own xid.  On ROLLBACK the
abort bits of the aborted (sub)transaction and all its children are
set in pg_clog immediately.  This operation does not have to be
atomic.  On subtransaction COMMIT nothing happens to pg_clog, the
status is only changed locally, the subtransaction still looks in
progress to other backends.  Only when the main transaction commits,
we set the commit bits of the main transaction and all its non-aborted
children in pg_clog.  This action has to be atomic.  Right?

AFAICS the problem lies in updating several pg_clog bits at once.  How
can this be done without holding a potentially long lasting lock?

You could store the backend slot id in pg_clog rather than the parent
xid and look up the status of the outer xid for that backend slot.  That
would allow you to use 2 bytes, with a max of 16k backends.  The problem
is that on a crash, the pg_clog points to invalid slots --- it would
probably have to be cleaned up on startup.

Again I would try to keep pg_clog compact and store the backend slots
in another file, thus not slowing down instances where subtransactions
are not used.  Apart from this minor detail I don't see how this is
supposed to work.  Could you elaborate?

But still, you have an interesting idea of just setting the bit to be "I
am a child."

The idea was to set subtransaction bits in the tuple header.  Here is
yet another different idea:  Let the currently unused fourth state in
pg_clog indicate a committed subtransaction.  There are two bits per
transaction, commit and abort, with the following meaning:

 a  c
 0  0  transaction in progress, the owning backend knows whether it is
   a main- or a sub-transaction, other backends don't care
 1  0  aborted, nobody cares whether main- or sub-transaction
 0  1  committed main-transaction (*)
 1  1  committed sub-transaction, have to look for parent in
   pg_subtrans

If we allow the 1/1 state to be replaced with 0/1 or 1/0 (on the fly
as a side effect of a visibility check, or by vacuum, or by
COMMIT/ROLLBACK), this could save a lot of parent lookups without
having to touch the xids in the tuple headers.

So (*) should read: committed main-transaction or committed
sub-transaction having a committed parent.

The trick is allowing backends to figure out whose child
you are.  We could store this somehow in shared memory, but that is
finite and there can be lots of xid's for a backend using
subtransactions.

The subtrans dependencies have to be visible to all backends.  Store
them to disk just like pg_clog.  In older proposals I spoke of a
pg_subtrans table containing (parent, child) pairs.  This was only
meant as a concept, not as a real SQL table subject to MVCC.  An
efficient(?) implementation could be an array of parent xids, indexed
by child xid.  Most of it can be stolen from the clog code.

One more argument for pg_subtrans being visible to all backends:  If
an UPDATE is about to change a tuple touched by another active
transaction, it waits for the other transaction to commit or abort.
We must always wait for the main transaction, not the subtrans.

 I still think there must be a clean way,
I hope so ...

 but I haven't figured it out yet.
Are we getting nearer?

Servus
 Manfred




Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-11-28 Thread Tom Lane
Matthew T. O'Connor [EMAIL PROTECTED] writes:
 interesting thought.  I think this boils down to how many knobs we
 need to put on this system. It might make sense to, say, allow up to X
 concurrent vacuums; a 4-processor system might handle 4 concurrent
 vacuums very well.

This is almost certainly a bad idea.  vacuum is not very
processor-intensive, but it is disk-intensive.  Multiple vacuums running
at once will suck more disk bandwidth than is appropriate for a
background operation, no matter how sexy your CPU is.  I can't see
any reason to allow more than one auto-scheduled vacuum at a time.

regards, tom lane




Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian
Hans-Jürgen Schönig wrote:
 Is there going to be a way to use transactions inside transactions inside 
 transactions?
 In other words:
 
 BEGIN;
 BEGIN;
 BEGIN;
 BEGIN;
 
 COMMIT;
 COMMIT;
 COMMIT;
 COMMIT;
 
 Is there a way to have some sort of recursive solution with every 
 transaction but the first one being a child transaction?
 Is there a way to implement that without too much extra effort?
 I'm just curious how that could be done.

Sure, nesting will be unlimited.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




[HACKERS] System Tables

2002-11-28 Thread Lee Kindness
See:

 http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/catalogs.html

for PostgreSQL 7.2.x, for 7.3 see:

 http://developer.postgresql.org/docs/postgres/catalogs.html

Lee.

Steve Jackson writes:
  Hi
  
  Does anyone have a good pointer to a description of which information can 
  be found where in the system tables? I am trying to code a generic procedure 
  which gets information (like field type, field length, foreign keys...) 
  about tables and fields from the system tables.
  
  Thank you for your help in advance




Re: [HACKERS] System Tables

2002-11-28 Thread Neil Conway
On Thu, 2002-11-28 at 02:32, Steve Jackson wrote:
 Does anyone have a good pointer to a description of which information can 
 be found where in the system tables?

The PostgreSQL Developer's Guide has some information:

http://developer.postgresql.org/docs/postgres/catalogs.html

But IIRC it might be a little out of date. Also, starting psql with -E
and taking a look at the queries it uses to generate data is often
useful.
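
For example, a minimal sketch of pulling column names and types straight
from the catalogs ('mytable' is a placeholder):

    SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS type
    FROM pg_catalog.pg_attribute a
    WHERE a.attrelid = 'mytable'::regclass
      AND a.attnum > 0
      AND NOT a.attisdropped
    ORDER BY a.attnum;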

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC







Re: [HACKERS] System Tables

2002-11-28 Thread Andrew J. Kopciuch
On Thursday 28 November 2002 00:32, Steve Jackson wrote:
 Hi

 Does anyone have a good pointer to a description of which information can
 be found where in the system tables? I am trying to code a generic procedure
 which gets information (like field type, field length, foreign keys...)
 about tables and fields from the system tables.


Read the man page for psql.  
Check out the section for PSQL META_COMMANDS.

Sounds like everything you need is in there.

Andy





Re: [HACKERS] record object type

2002-11-28 Thread Neil Conway
On Thu, 2002-11-28 at 11:12, Steve Jackson wrote:
 Is there any deeper description of the record type in plpgsql?

Other than in the PL/PgSQL documentation, you mean? I dunno, the code, I
guess :-)

What specific information are you looking for?
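
If it's row iteration you're after, the usual pattern is a FOR loop over a
query with a RECORD variable (a minimal sketch, assuming plpgsql is
installed in the database; the function name is made up):

    CREATE FUNCTION count_tables() RETURNS integer AS '
    DECLARE
        rec RECORD;
        n   integer := 0;
    BEGIN
        FOR rec IN SELECT relname FROM pg_class LOOP
            n := n + 1;    -- fields are reachable as rec.relname
        END LOOP;
        RETURN n;
    END;
    ' LANGUAGE 'plpgsql';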

Cheers,

Neil
-- 
Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC







Re: [HACKERS] [GENERAL] Request from eWeek for 7.3 comments

2002-11-28 Thread Joshua D. Drake
Hello,

   Command Prompt, Inc. looks forward to the open source release of 
PostgreSQL 7.3, as we are testing our commercial version, Mammoth 
PostgreSQL 7.3. The updated release of the core PostgreSQL code base 
has added many much-needed, previously missing features such as drop 
column. With the new features, coupled with the additional features 
added by Mammoth PostgreSQL such as pre-forked connections, stream-level 
compression and Mammoth LXP (the PostgreSQL application server), 
PostgreSQL is set to take center stage from products such as MySQL for 
delivering enterprise-class applications to the database market.

Sincerely,

Joshua D. Drake
Co-Founder Command Prompt, Inc.
Co-Author Practical PostgreSQL

Bruce Momjian wrote:

I just spoke with Lisa Vaas from eWeek.  She is writing an article on
the upcoming PostgreSQL 7.3 release.  (The release of 7.3 is scheduled
for tomorrow.)

She would like comments from users about the upcoming 7.3 features,
listed at:

	http://developer.postgresql.org/docs/postgres/release.html#RELEASE-7-3

If you are interested, please reply to this email with any comments you
might have.  I have directed replies to her email address.  She would
like comments within the next few hours, until midnight EST.

 






Re: [HACKERS] Hierarchical queries a la Oracle. Patch.

2002-11-28 Thread Evgen Potemkin
Thanks, it's VERY helpful.
Understanding the SQL99 draft is a bit more difficult than I thought :)

regards,

---
.evgen

On 27 Nov 2002, Hannu Krosing wrote:

 I attach a railroad diagram of SQL99 WITH RECURSIVE and a diff against
 mid-summer gram.y which implements half of SQL99 _syntax_ (just the WITH
 {RECURSIVE} part, SEARCH (tree search order order) and CYCLE (recursion
 control) clauses are missing).






Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian
Manfred Koizar wrote:
 On Wed, 27 Nov 2002 22:47:33 -0500 (EST), Bruce Momjian
 [EMAIL PROTECTED] wrote:
 The interesting issue is that if we could set the commit/abort bits all
 at the same time, we could have the parent/child dependency local to the
 backend --- other backends don't need to know the parent, only the
 status of the (subtransaction's) xid, and they need to see all those
 xid's committed at the same time.
 
 You mean the commit/abort bit in the tuple headers?  Yes, this would
 be interesting, but I see no way this could be done.  If it could,
 there would be no need for pg_clog.
 
 Reading your paragraph above one more time I think you mean the bits
 in pg_clog:  Each subtransaction gets its own xid.  On ROLLBACK the

Right.

 abort bits of the aborted (sub)transaction and all its children are
 set in pg_clog immediately.  This operation does not have to be
 atomic.  On subtransaction COMMIT nothing happens to pg_clog, the

Right, going from RUNNING to ABORTED doesn't have to be atomic because
both tuples are invisible.

 status is only changed locally, the subtransaction still looks in
 progress to other backends.  Only when the main transaction commits,
 we set the commit bits of the main transaction and all its non-aborted
 children in pg_clog.  This action has to be atomic.  Right?

Right.  We can't have some backends looking at part of the transaction
as committed while at the same time other backends see the transaction
as in process.

 AFAICS the problem lies in updating several pg_clog bits at once.  How
 can this be done without holding a potentially long lasting lock?

Yes, locking is one possible solution, but no one likes that.  One hack
lock idea would be to create a subtransaction-only lock, so if you see
the special 4th xact state (about to be committed as part of a
subtransaction) you have to wait on that lock (held by the backend
twiddling the xact bits), then look again.  That basically would
serialize all the bit-twiddling for subtransactions.  I am sure I am
going to get a yuck from the audience on that one, but I am not sure
how long that bit twiddling could take.  Does xact twiddling ever cause
I/O?  I think it could, which would be a pretty big performance problem.
It would serialize the subtransaction commits _and_ block anyone trying
to get the status of those subtransactions.  We would not use the
4th xid status during the transaction, only while we were twiddling the
bits on commit.

 You could store the backend slot id in pg_clog rather than the parent
 xid and look up the status of the outer xid for that backend slot.  That
 would allow you to use 2 bytes, with a max of 16k backends.  The problem
 is that on a crash, the pg_clog points to invalid slots --- it would
 probably have to be cleaned up on startup.
 
  Again I would try to keep pg_clog compact and store the backend slots
  in another file, thus not slowing down instances where subtransactions
  are not used.  Apart from this minor detail I don't see how this is
  supposed to work.  Could you elaborate?

The trick is that when that 4th status is set, backends looking up the
status all need to point to a central location that can be set for all
of them at once, hence the original idea of putting the parent xid in
the clog file.  We don't _need_ to do that, but we do need a way to
_point_ to a central location where the status can be looked up.

 But still, you have an interesting idea of just setting the bit to be "I
 am a child."
 
 The idea was to set subtransaction bits in the tuple header.  Here is
 yet another different idea:  Let the currently unused fourth state in
 pg_clog indicate a committed subtransaction.  There are two bits per
 transaction, commit and abort, with the following meaning:
 
  a  c
  0  0  transaction in progress, the owning backend knows whether it is
a main- or a sub-transaction, other backends don't care
  1  0  aborted, nobody cares whether main- or sub-transaction
  0  1  committed main-transaction (*)
  1  1  committed sub-transaction, have to look for parent in
pg_subtrans
 
 If we allow the 1/1 state to be replaced with 0/1 or 1/0 (on the fly
 as a side effect of a visibility check, or by vacuum, or by
 COMMIT/ROLLBACK), this could save a lot of parent lookups without
 having to touch the xids in the tuple headers.

Yes, you could do that, but we can easily just set the clog bits
atomically, and it will not be needed --- the tuple bits really don't
help us, I think.

 So (*) should read: committed main-transaction or committed
 sub-transaction having a committed parent.
 
  The trick is allowing backends to figure out whose child
 you are.  We could store this somehow in shared memory, but that is
 finite and there can be lots of xid's for a backend using
 subtransactions.
 
 The subtrans dependencies have to be visible to all backends.  Store
 them to disk just like pg_clog.  In older proposals I spoke of a
 pg_subtrans table containing (parent, 

Re: [HACKERS] Is current_user a function ?

2002-11-28 Thread Tom Lane
Masaru Sugawara [EMAIL PROTECTED] writes:
 As for some current_*** functions, "select current_user;" seems to
 work, but "select current_user();" doesn't.

Complain to the SQL spec authors --- they mandated this peculiar keyword
syntax for what is really a function call.

regards, tom lane




Re: [HACKERS] System Tables

2002-11-28 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 Has anyone of you a good pointer to a description of where in the system 
 tables I may find what informations?

 The PostgreSQL Developer's Guide has some information:
 http://developer.postgresql.org/docs/postgres/catalogs.html

 But IIRC it might be a little out of date.

One would hope not.  It's taken us awhile to finish the work of
documenting every system catalog, but as of 7.3 they are all in there.
I intend to crack the whip as much as necessary to ensure that these
docs get updated whenever someone changes the catalogs ;-)

Actually, pointing people to the developer docs is more likely to create
the reverse problem: what they read there may be too new for the release
they are actually using.

regards, tom lane




[HACKERS] doc/src/Makefile annoyance

2002-11-28 Thread Tom Lane
Anyone know why the install target in doc/src/Makefile is coded like
this:

install:
	$(MAKE) all
	(mv -f *.$(ZIPSUFFIX) ..)

and not the more conventional

install: all
	mv -f *.$(ZIPSUFFIX) ..

or perhaps safer,

install: all
	mv -f $(TARGETS) ..

I just typed "make all", waited a good long while, typed "make install",
and was seriously annoyed to watch it "make clean" and start the docs
build *again*.  This behavior is broken IMHO.

regards, tom lane




Re: [HACKERS] Boolean casting in 7.3 - changed?

2002-11-28 Thread Ian Barwick
On Thursday 28 November 2002 00:18, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  Ian Barwick writes:
  Casting integers to boolean (for example, 0::bool) is no longer
  allowed, use '0'::bool instead.
 
  This advice would probably only cause more confusion, because we are now
  moving into the direction that character strings are no longer acceptable
  as numeric data.

 Yes, phrased that way it's just misleading.

OK, what I am trying to say is something like:

If you are upgrading an application to PostgreSQL 7.3
and are having problems with boolean casts which look like
0::bool or 1::bool, which previously worked without any problem
(although not explicitly supported), you will need to rewrite them
to use the values listed here:

http://www.postgresql.org/idocs/index.php?datatype-boolean.html

Doing things like '0'::bool will also work but is not recommended.

because that's a problem I came across but found no mention of,
so I thought I would point it out for the benefit of anyone else
who might encounter it ;-)

For reference, the reason why I was casting integer-like literals
to boolean in the first place is: 
 - the Perl application used to run on a combination of MySQL and Oracle;
 - Perl doesn't have a boolean data type, but the values 0 and 1
   in scalar context do the job just as well;
 - MySQL happily accepts integer literals for boolean column types,
   e.g. INSERT INTO table_with_boolean_column (boolean_column) VALUES (0);
 - the same statement in PostgreSQL produced
     ERROR:  Attribute 'boolean_column' is of type 'bool' but expression is of type 'int4'
     You will need to rewrite or cast the expression
 - so I did what it said and wrote 0::bool, and thought
   no further of it, until now when I began the upgrade;
 - being in a bit of a hurry I tried '0'::bool and it worked...
 - having rtfm, obviously just '0' and no mucking about with casting
   is better anyway...

Peter Eisentraut [EMAIL PROTECTED] wrote:
 Note that

 x <> 0

 is also a perfectly good way to convert integers to booleans, and a more
 portable one at that.

Ah, that is a useful tip. 
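
Putting the options discussed here side by side (a sketch against 7.3):

    SELECT '0'::bool;    -- works, but string-for-number literals are discouraged
    SELECT 0::bool;      -- no longer allowed in 7.3
    SELECT 0 <> 0;       -- portable; yields a proper boolean (false)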

Thanks for the information

Ian Barwick
[EMAIL PROTECTED]





Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian
pgman wrote:
  AFAICS the problem lies in updating several pg_clog bits at once.  How
  can this be done without holding a potentially long lasting lock?
 
 Yes, locking is one possible solution, but no one likes that.  One hack
 lock idea would be to create a subtransaction-only lock, so if you see
 the special 4th xact state (about to be committed as part of a
 subtransaction) you have to wait on that lock (held by the backend
 twiddling the xact bits), then look again.  That basically would
 serialize all the bit-twiddling for subtransactions.  I am sure I am
 going to get a yuck from the audience on that one, but I am not sure
 how long that bit twiddling could take.  Does xact twiddling ever cause
 I/O?  I think it could, which would be a pretty big performance problem.
 It would serialize the subtransaction commits _and_ block anyone trying
 to get the status of those subtransactions.  We would not use the
 4th xid status during the transaction, only while we were twiddling the
 bits on commit.

Let me correct this.  Transaction state readers _don't_ have to block
while the subtransaction is twiddling bits.  Logic would be:

Set all aborted subtransaction status bits
Grab subxact lock
Set subxact global status bit to in progress
Set all subtransaction xids to SUBXACT_TO_COMMIT
Set subxact global status bit to committed (commit happens here)
Set all SUBXACT_TO_COMMIT xids to COMMITTED
Release subxact lock

Any transaction looking up a subtransaction that has an
SUBXACT_TO_COMMIT state has to consult the global subxact status bit,
which is a global variable in shared memory.

What this basically does is to funnel all subxid lookups into a single
global subxid status bit.  In fact, even the outer transaction has to be
set to SUBXACT_TO_COMMIT so it commits at the same time as the
subtransactions.

On crash recovery, all SUBXACT_TO_COMMIT have to be cleaned up, somehow,
perhaps using WAL.
 
The only downside to this approach is that subtransaction bit twiddling
is serialized.

This is an interesting idea because it has overhead only for backends
using subtransactions.  It does kill our multiple commits at the same
time that we can do with normal transactions.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




[HACKERS] Query performance. 7.2.3 Vs. 7.3

2002-11-28 Thread wade
  While playing with one of my DBs under 7.3 to make use of its better
explain features, I came across a query that runs significantly slower
under 7.3 than under 7.2.3.  At first, I thought it would be a hardware
issue, so I installed both versions on the same box.
7.2.3 tends to run the query in 80% of the time 7.3 does.
Explain output can be found at http://arch.wavefire.com/72v73a.txt

Please don't hesitate to drop me a line if you require more info.
 -Wade Klaver




Re: [HACKERS] Query performance. 7.2.3 Vs. 7.3

2002-11-28 Thread Tom Lane
wade [EMAIL PROTECTED] writes:
   While playing with one of my DBs under 7.3 to make use of its better
 explain features, I came across a query that runs significantly slower
 under 7.3 than under 7.2.3.  At first, I thought it would be a hardware
 issue, so I installed both versions on the same box.
 7.2.3 tends to run the query in 80% of the time 7.3 does.
 Explain output can be found at http://arch.wavefire.com/72v73a.txt

The difference evidently is that 7.3 chooses a mergejoin where 7.2
picks a hashjoin.

AFAICT this must be a consequence of the reduction in mergejoin
estimated costs associated with this patch:

2002-02-28 23:09  tgl

* src/: backend/executor/nodeMergejoin.c,
backend/optimizer/path/costsize.c, backend/utils/adt/selfuncs.c,
backend/utils/cache/lsyscache.c, include/utils/lsyscache.h,
include/utils/selfuncs.h: Teach planner about the idea that a
mergejoin won't necessarily read both input streams to the end.  If
one variable's range is much less than the other, an
indexscan-based merge can win by not scanning all of the other
table.  Per example from Reinhard Max.

since we really didn't do anything else in 7.3 that changed the behavior
of costsize.c.

I can't get totally excited about a 20% estimation error (if the planner
was never off by more than that, I'd be overjoyed ;-)) ... but if you
want to dig into the statistics and try to figure out why this added
logic is misestimating in your particular case, I'd be interested to
hear.  Probably the first thing to look at is why the estimated row
counts are off by almost a factor of 3 for that join.
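
For anyone digging in, the usual first step is to raise the statistics
target on the join columns and re-analyze (a sketch with placeholder
names; the default target of 10 is often too coarse for skewed join
columns):

    ALTER TABLE bigtable ALTER COLUMN joincol SET STATISTICS 100;
    ANALYZE bigtable;
    -- then re-run EXPLAIN ANALYZE and compare estimated vs. actual rows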

regards, tom lane




Re: [HACKERS] Query performance. 7.2.3 Vs. 7.3

2002-11-28 Thread Neil Conway
On Thu, 2002-11-28 at 21:23, Tom Lane wrote:
 wade [EMAIL PROTECTED] writes:
  Explain output can be found at http://arch.wavefire.com/72v73a.txt
 
 The difference evidently is that 7.3 chooses a mergejoin where 7.2
 picks a hashjoin.

I was looking at this a bit in IRC, and I was more concerned by the fact
that 7.3 was 20% slower than 7.2 on the same hardware when they both used
the same query plan (consider the data at the end of the URL above, after
the execution of 'SET enable_mergejoin = off;').

Also, it is expected that the cardinality estimates for join steps won't
be very accurate, right? (estimated: 19 rows, actual: 765 rows)

Cheers,

Neil
-- 
Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC







Re: [HACKERS] Query performance. 7.2.3 Vs. 7.3

2002-11-28 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 I was looking at this a bit in IRC, and I was more concerned by the fact
 that 7.3 was 20% than 7.2 on the same hardware, when they both used the
 same query plan (consider the data at the end of the URL above, after
 the execution of 'SET enable_mergejoin = off;').

Hm.  Are we sure that both versions were built with the same
optimization level, etc?  (My private bet is that Wade's 7.2 didn't
have multibyte or locale support --- but that's a long shot when we
don't know the datatypes of the columns being joined on...)

 Also, is it expected that the cardinality estimates for join steps won't
 be very accurate, right? (estimated: 19 rows, actual: 765 rows)

Well, it'd be nice to do better --- I was hoping Wade would look into
why the row estimates were off so much.

regards, tom lane




Re: [HACKERS] Hierarchical queries a la Oracle. Patch.

2002-11-28 Thread Hannu Krosing
On Thu, 2002-11-28 at 17:34, Evgen Potemkin wrote:
 thanks, it's VERY helpful.
 understanding SQL99 draft is a bit more difficult than i thought :)

You might also try to get DB2 installed somewhere (IIRC IBM gives out 
limited time developer copies).

It implements at least the basic recursive query (without requiring the
word RECURSIVE :)

 
 --
 Hannu




Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-11-28 Thread Shridhar Daithankar
On 28 Nov 2002 at 10:45, Tom Lane wrote:

 Matthew T. O'Connor [EMAIL PROTECTED] writes:
  interesting thought.  I think this boils down to how many knobs we
  need to put on this system. It might make sense to, say, allow up to X
  concurrent vacuums; a 4-processor system might handle 4 concurrent
  vacuums very well.
 
 This is almost certainly a bad idea.  vacuum is not very
 processor-intensive, but it is disk-intensive.  Multiple vacuums running
 at once will suck more disk bandwidth than is appropriate for a
 background operation, no matter how sexy your CPU is.  I can't see
 any reason to allow more than one auto-scheduled vacuum at a time.

Hmm.. We would need to take care of that as well.. 

Bye
 Shridhar

--
In most countries selling harmful things like drugs is punishable. Then how come 
people can sell Microsoft software and go unpunished? (By [EMAIL PROTECTED], 
Hasse Skrifvars)





Re: [HACKERS] nested transactions

2002-11-28 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Yes, locking is one possible solution, but no one likes that.  One hack
 lock idea would be to create a subtransaction-only lock, so if you see
 the special 4th xact state (about to be committed as part of a
 subtransaction) you have to wait on that lock (held by the backend
 twiddling the xact bits), then look again.  That basically would
 serialize all the bit-twiddling for subtransactions.  I am sure I am
 going to get a yuck from the audience on that one,

You sure are.

 but I am not sure
 how long that bit twiddling could take.  Does xact twiddling ever cause
 I/O?

Yes, if the page of pg_clog you need to touch is not currently in a
buffer.  With a large transaction you might have hundreds of
subtransactions, which could take an unpleasantly long time to mark
all committed.

What's worse, I think the above proposal requires a *single* lock for
this purpose (if there's more than one, how shall the requestor know
which one to block on?) --- so you are serializing all transaction
commits that have subtransactions, with only one able to go through at
a time.  That will really, really not do; the performance will be way
worse than the chaining idea we discussed before.

 You could store the backend slot id in pg_clog rather than the parent
 xid and look up the status of the outer xid for that backend slot.  That
 would allow you to use 2 bytes, with a max of 16k backends.

This is also a bad idea, because backend slot ids are not stable (by the
time you look in PG_PROC, the slot may be occupied by a new, unrelated
backend process).

 But still, you have an interesting idea of just setting the bit to be "I
 am a child."

That bit alone doesn't help; you need to know *whose* child.

AFAICS, the objection to putting parent xact IDs into pg_clog is
basically a performance issue: bigger clog means more I/O.  This is
surely true; but the alternatives proposed so far are worse.

regards, tom lane




Re: [HACKERS] Hierarchical queries a la Oracle. Patch.

2002-11-28 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 On Thu, 2002-11-28 at 17:34, Evgen Potemkin wrote:
 understanding SQL99 draft is a bit more difficult than i thought :)

 You might also try to get DB2 installed somewhere (IIRC IBM gives out 
 limited time developer copies).

Even without DB2 installed, you can read the documentation for it on the
web.  There's quite a lot of useful info in IBM's docs.  For example
http://nscpcw.physics.upenn.edu/db2_docs/db2s0/withsel.htm
and the example starting at
http://nscpcw.physics.upenn.edu/db2_docs/db2s0/db2s0446.htm

(The docs are probably also readable directly from IBM, but this is the
first copy I found by googling...)
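
For reference, the SQL99 shape under discussion looks roughly like this
(a sketch; the emp table is made up, and 7.3 does not accept this syntax):

    WITH RECURSIVE subordinates (empno, mgrno) AS (
        SELECT empno, mgrno FROM emp WHERE mgrno IS NULL
      UNION ALL
        SELECT e.empno, e.mgrno
        FROM emp e, subordinates s
        WHERE e.mgrno = s.empno
    )
    SELECT * FROM subordinates;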

regards, tom lane




Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian

I should add that I am not prepared to overhaul the pg_clog file format
as part of adding subtransactions for 7.4.  I can do the tid/sequential scan
method for abort, or the single-lock method described.

---

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Yes, locking is one possible solution, but no one likes that.  One hack
  lock idea would be to create a subtransaction-only lock, so if you see
  the special 4th xact state (about to be committed as part of a
  subtransaction) you have to wait on that lock (held by the backend
  twiddling the xact bits), then look again.  That basically would
  serialize all the bit-twiddling for subtransactions.  I am sure I am
  going to get a yuck from the audience on that one,
 
 You sure are.
 
  but I am not sure
  how long that bit twiddling could take.  Does xact twiddling ever cause
  I/O?
 
 Yes, if the page of pg_clog you need to touch is not currently in a
 buffer.  With a large transaction you might have hundreds of
 subtransactions, which could take an unpleasantly long time to mark
 all committed.
 
 What's worse, I think the above proposal requires a *single* lock for
 this purpose (if there's more than one, how shall the requestor know
 which one to block on?) --- so you are serializing all transaction
 commits that have subtransactions, with only one able to go through at
 a time.  That will really, really not do; the performance will be way
 worse than the chaining idea we discussed before.
 
  You could store the backend slot id in pg_clog rather than the parent
  xid and look up the status of the outer xid for that backend slot.  That
  would allow you to use 2 bytes, with a max of 16k backends.
 
 This is also a bad idea, because backend slot ids are not stable (by the
 time you look in PG_PROC, the slot may be occupied by a new, unrelated
 backend process).
 
  But still, you have an interesting idea of just setting the bit to be "I
  am a child."
 
 That bit alone doesn't help; you need to know *whose* child.
 
 AFAICS, the objection to putting parent xact IDs into pg_clog is
 basically a performance issue: bigger clog means more I/O.  This is
 surely true; but the alternatives proposed so far are worse.
 
   regards, tom lane
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




[HACKERS] How to compile postgres source code in VC++

2002-11-28 Thread Prasanna Phadke
Can anybody explain to me how to compile the postgres source code in VC++?

Re: [HACKERS] nested transactions

2002-11-28 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 I should add that I am not prepared to overhaul the pg_clog file format
 as part of adding subtransactions for 7.4.  I can do the tid/sequential scan
 method for abort, or the single-lock method described.

If you think that changing the pg_clog file format would be harder than
either of those other ideas, I think you're very badly mistaken.
pg_clog is touched only by one rather simple module.

I think the other methods will be completely unacceptable from a
performance point of view.  They could maybe work if subtransactions
were a seldom-used feature; but the people who want to use 'em are
mostly talking about a subtransaction for *every* command.  If you
design your implementation on the assumption that subtransactions are
infrequent, it will be unusably slow.

regards, tom lane




Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  I should add that I am not prepared to overhaul the pg_clog file format
  as part of adding subtransactions for 7.4.  I can do the tid/sequential scan
  method for abort, or the single-lock method described.
 
 If you think that changing the pg_clog file format would be harder than
 either of those other ideas, I think you're very badly mistaken.
 pg_clog is touched only by one rather simple module.

Agreed, the clog changes would be the simple solution.  However, I am
not sure I can make that level of changes.  In fact, the complexity of
having multiple transactions per backend is going to be tough for me
too.

Also, I should point out that ballooning pg_clog by 16x is going to mean
we are perhaps 4-8x more likely to need extra pages to mark all
subtransactions.

Isn't there some other way we can link these subtransactions together
rather than mucking with pg_clog, as we only need the linkage while we
mark them all committed?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] nested transactions

2002-11-28 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Also, I should point out that ballooning pg_clog by 16x is going to mean
 we are perhaps 4-8x more likely to need extra pages to mark all
 subtransactions.

So?  The critical point is that we don't need to serialize the pg_clog
operations if we do it that way.  Also, we can certainly expand the
number of pg_clog pages held in memory by some amount.  Right now it's
only 4, IIRC.  We could make it 64 and probably no one would even
notice.

 Isn't there some other way we can link these subtransactions together
 rather than mucking with pg_clog, as we only need the linkage while we
 mark them all committed?

You *cannot* expect to do it all in shared memory; you will be blown out
of the water by the first long transaction that comes along, if you try.
So the question is not whether we put the status into a file, it is only
what representation we choose.

Manfred suggested a separate log file (pg_subclog or some such) but
I really don't see any operational advantage to that.  You still end up
with 4 bytes per transaction, you're just assuming that putting them
in a different file makes it better.  I don't see how.

regards, tom lane




Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian
Tom Lane wrote:
  Isn't there some other way we can link these subtransactions together
  rather than mucking with pg_clog, as we only need the linkage while we
  mark them all committed?
 
 You *cannot* expect to do it all in shared memory; you will be blown out
 of the water by the first long transaction that comes along, if you try.
 So the question is not whether we put the status into a file, it is only
 what representation we choose.
 
 Manfred suggested a separate log file (pg_subclog or some such) but
 I really don't see any operational advantage to that.  You still end up
 with 4 bytes per transaction, you're just assuming that putting them
 in a different file makes it better.  I don't see how.

It only becomes better if we can throw away that file (or contents) when
the transaction completes and we have marked all the subtransactions as
completed.  We can't compress pg_clog if we store the parent info in
there.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] nested transactions

2002-11-28 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 It only becomes better if we can throw away that file (or contents) when
 the transaction completes and we have marked all the subtransactions as
 completed.  We can't compress pg_clog if we store the parent info in
 there.

But we already have a recycling mechanism for pg_clog.  AFAICS,
creating a parallel log file with a separate recycling mechanism is
a study in wasted effort.

regards, tom lane




Re: [HACKERS] nested transactions

2002-11-28 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  It only becomes better if we can throw away that file (or contents) when
  the transaction completes and we have marked all the subtransactions as
  completed.  We can't compress pg_clog if we store the parent info in
  there.
 
 But we already have a recycling mechanism for pg_clog.  AFAICS,
 creating a parallel log file with a separate recycling mechanism is
 a study in wasted effort.

But that recycling requires the vacuum of every database in the system. 
Do people do that frequently enough?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] nested transactions

2002-11-28 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 But we already have a recycling mechanism for pg_clog.  AFAICS,
 creating a parallel log file with a separate recycling mechanism is
 a study in wasted effort.

 But that recycling requires the vacuum of every database in the system. 
 Do people do that frequently enough?

Once the auto vacuum code is in there, they won't have any choice ;-)

In any case, I saw no part of your proposal that removed the need for
vacuum, so what's your point?

regards, tom lane
