Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Al Sutton
I'd like to show/register interest.

I can see it being very useful when combined with replication for situations
where the replicant databases are geographically separated (i.e. Disaster
Recovery sites, or systems maintaining replicas in order to reduce the
distance from user to app to database). The bandwidth cost savings from
compressing the replication information would be immensely useful.

Al.

- Original Message -
From: "Joshua D. Drake" <[EMAIL PROTECTED]>
To: "Bruce Momjian" <[EMAIL PROTECTED]>
Cc: "Greg Copeland" <[EMAIL PROTECTED]>; "Al Sutton"
<[EMAIL PROTECTED]>; "Stephen L." <[EMAIL PROTECTED]>; "PostgresSQL Hackers
Mailing List" <[EMAIL PROTECTED]>
Sent: Tuesday, December 10, 2002 8:04 PM
Subject: Re: [mail] Re: [HACKERS] 7.4 Wishlist


> Hello,
>
>We would probably be open to contributing it if there was interest.
> There wasn't interest initially.
>
> Sincerely,
>
> Joshua Drake
>
>
> Bruce Momjian wrote:
> > Greg Copeland wrote:
> >
> >>On Tue, 2002-12-10 at 11:25, Al Sutton wrote:
> >>
> >>>Would it be possible to make compression an optional thing, with the
> >>>default being off?
> >>>
> >>
> >>I'm not sure.  You'd have to ask Command Prompt (Mammoth) or wait to see
> >>what appears.  What I originally had envisioned was a per database and
> >>user permission model which would better control use.  Since compression
> >>can be rather costly for some use cases, I also envisioned it being
> >>negotiated where only the user/database combo with permission would be
> >>able to turn it on.  I do recall that compression negotiation is part of
> >>the Mammoth implementation but I don't know if it's a simple capability
> >>negotiation or part of a larger scheme.
> >
> >
> > I haven't heard anything about them contributing it.  Doesn't mean it
> > will not happen, just that I haven't heard it.
> >
> > I am not excited about per-db/user compression because of the added
> > complexity of setting it up, and even set up, I can see cases where some
> > queries would want it, and others not.  I can see using GUC to control
> > this.  If you enable it and the client doesn't support it, it is a
> > no-op.  We have per-db and per-user settings, so GUC would allow such
> > control if you wish.
> >
> > Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
> > meaning it would determine if there was value in the compression and do
> > it only when it would help.
> >
>
> --
> CommandPrompt - http://www.commandprompt.com 
>+1.503.222-2783  
>
>



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Reusing Dead Tuples:

2002-12-10 Thread Tom Lane
Janardhan <[EMAIL PROTECTED]> writes:
> If I am not wrong, while updating a tuple we are also creating a new
> index entry.

Yes.

> So if the
> tuple is dead, then the index entry pointing to it is also a dead index tuple.

Yes.

> So even if the dead index tuple is not
> removed, it should not break things, since the dead index tuple
> will not be used.  Am I correct?

No.  A process running an indexscan will assume that the index tuple
accurately describes the heap tuple it is pointing at.  If the heap
tuple is live then it will be returned as satisfying the indexscan.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Reusing Dead Tuples:

2002-12-10 Thread Janardhan




Tom Lane wrote:
> Janardhan <[EMAIL PROTECTED]> writes:
> > Does it break anything by overwriting the dead tuples?
> 
> Yes.  You cannot do that unless you've first removed index entries
> pointing at the dead tuples --- and jumped through the same locking
> hoops that lazy vacuum does while removing index entries.
> 
> 			regards, tom lane

If I am not wrong, while updating a tuple we are also creating a new
index entry.  So if the tuple is dead, then the index entry pointing to it
is also a dead index tuple.  So even if the dead index tuple is not removed,
it should not break things, since the dead index tuple will not be used.
Am I correct?

What is the reason the dead heap tuples are maintained in a linked list,
since for every dead heap tuple there is a corresponding dead index tuple?

Regards
jana




Re: [HACKERS] DB Tuning Notes for comment...

2002-12-10 Thread Philip Warner
At 03:54 PM 9/12/2002 -0500, Tom Lane wrote:

I have some uncommitted patches concerning the FSM management heuristics
from Stephen Marshall, which I deemed too late/risky for 7.3, but we
should get something done for 7.4.  Anyone interested in playing around
in this area?


I'd be interested in seeing the patches, but can't commit to doing anything 
with them at this point. I would like to get to the bottom of the weird 
behaviour, however.




Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 03 5330 3172  | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Tom Lane
Rod Taylor <[EMAIL PROTECTED]> writes:
> On Tue, 2002-12-10 at 22:56, Tom Lane wrote:
>> relation's pg_class row.  We have no such locks on types at present,
>> but I think it may be time to invent 'em.

> I'd be happy to use them once created.

I think you misunderstood me ;=) ... that was a none-too-subtle
suggestion that *you* should go invent 'em, seeing as how you're the
one pushing the feature that makes 'em necessary.

The lock manager itself deals with lock tags that could be almost
anything.  We currently only use lock tags that represent relations or
specific pages in relations, but I see no reason that there couldn't
also be lock tags representing types --- or other basic catalog entries.
(I am trying hard to repress the thought that we may already need
locking on other classes of entities as well.)  What we need now is a
little thought about exactly how to represent these different lock tags
(should be easy), and about what semantics to assign to different lock
modes applied to pg_type entities (perhaps not so easy).

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Where does that leave the patch _until_ they are created?

I'd say "it's under death sentence unless fixed before 7.4 release".
I don't want to back it out in toto right now, because that will
interfere with other edits I'm in process of making (and also Rod
included some necessary fixes to the domain-constraint patch in the
alter-domain patch; which wasn't too clean of him but it's done).

For now, please put "fix or disable ALTER DOMAIN" on the must-do-
before-7.4 part of TODO.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [BUGS] GEQO Triggers Server Crash

2002-12-10 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Can we free only the plans we want to free in geqo?  I don't mind having
> a different free method in geqo vs. the rest of the optimizer.

GEQO calls "the rest of the optimizer", and the space that we're
worried about is almost all allocated in "the rest of the optimizer".
How are you going to implement two different free methods?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Reusing Dead Tuples:

2002-12-10 Thread Tom Lane
Janardhan <[EMAIL PROTECTED]> writes:
> Does it break anything by overwriting the dead tuples?

Yes.  You cannot do that unless you've first removed index entries
pointing at the dead tuples --- and jumped through the same locking
hoops that lazy vacuum does while removing index entries.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Bruce Momjian
Rod Taylor wrote:
> > relation's pg_class row.  We have no such locks on types at present,
> > but I think it may be time to invent 'em.
> 
> I'd be happy to use them once created.
> 
> Thanks again for the help.

Where does that leave the patch _until_ they are created?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Rod Taylor
On Tue, 2002-12-10 at 22:56, Tom Lane wrote:
> Rod Taylor <[EMAIL PROTECTED]> writes:
> >> 2. Insufficient locking, guise 2: there's no protection against someone
> >> else adding a column or table while you're processing an ALTER DOMAIN,
> >> either.  This means that constraint checks will be missed.  Example:
> 
> > Locking the entry in pg_type doesn't prevent that?
> 
> If there were such a thing as "locking the entry in pg_type", it might
> prevent that, but (a) there isn't, and (b) your code wouldn't invoke it 
> if there were.  Reading a row should surely not be tantamount to
> invoking an exclusive lock on it.

Hrm...  Yes.. I came to that conclusion while walking home. My concepts
of locking, and what actually happens in PostgreSQL are two completely
different things.

> In any case, other backends might have the pg_type entry in their
> syscaches, in which case their references to the type would be quite
> free of any actual read of the pg_type row that might fall foul of
> your hypothetical lock.

So... Basically I'm cooked.

> relation's pg_class row.  We have no such locks on types at present,
> but I think it may be time to invent 'em.

I'd be happy to use them once created.

Thanks again for the help.

-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Bruce Momjian

I have bumped minor versions for 7.3 and 7.4.  If we decide to do
something different later, fine, but this way we will not forget to have
some update for 7.3.

---

Bruce Momjian wrote:
> Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > Greg Copeland wrote:
> > >> Is it possible to automate this as part of the build
> > >> process so that they get grabbed from some version information during
> > >> the build?
> > 
> > > Version bump is one of the few things we do at the start of
> > > development.
> > 
> > The real problem here is that major version bump (signifying an
> > incompatible API change) is something that must NOT be done in an
> > automated, mindless-checklist way.  We should have executed the bump
> > when we agreed to change PQnotifies' API incompatibly.  We screwed up
> > on that.  I think it's correct to fix the error for 7.3.1 --- but we
> > cannot improve on the situation by making some procedural change to
> > "always do X at point Y in the release cycle".  Sometimes there's
> > no substitute for actual thinking :-(
> 
> Oh, a major bump.  I thought we did major bumps only in cases where a
> recompile will _not_ fix the problem, like changing a function's parameters
> or removing a function or something like that.
> 
> -- 
>   Bruce Momjian|  http://candle.pha.pa.us
>   [EMAIL PROTECTED]   |  (610) 359-1001
>   +  If your life is a hard drive, |  13 Roberts Road
>   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Greg Copeland wrote:
> >> Is it possible to automate this as part of the build
> >> process so that they get grabbed from some version information during
> >> the build?
> 
> > Version bump is one of the few things we do at the start of
> > development.
> 
> The real problem here is that major version bump (signifying an
> incompatible API change) is something that must NOT be done in an
> automated, mindless-checklist way.  We should have executed the bump
> when we agreed to change PQnotifies' API incompatibly.  We screwed up
> on that.  I think it's correct to fix the error for 7.3.1 --- but we
> cannot improve on the situation by making some procedural change to
> "always do X at point Y in the release cycle".  Sometimes there's
> no substitute for actual thinking :-(

Oh, a major bump.  I thought we did major bumps only in cases where a
recompile will _not_ fix the problem, like changing a function's parameters
or removing a function or something like that.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Greg Copeland wrote:
>> Is it possible to automate this as part of the build
>> process so that they get grabbed from some version information during
>> the build?

> Version bump is one of the few things we do at the start of
> development.

The real problem here is that major version bump (signifying an
incompatible API change) is something that must NOT be done in an
automated, mindless-checklist way.  We should have executed the bump
when we agreed to change PQnotifies' API incompatibly.  We screwed up
on that.  I think it's correct to fix the error for 7.3.1 --- but we
cannot improve on the situation by making some procedural change to
"always do X at point Y in the release cycle".  Sometimes there's
no substitute for actual thinking :-(

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Tom Lane
Rod Taylor <[EMAIL PROTECTED]> writes:
>> 2. Insufficient locking, guise 2: there's no protection against someone
>> else adding a column or table while you're processing an ALTER DOMAIN,
>> either.  This means that constraint checks will be missed.  Example:

> Locking the entry in pg_type doesn't prevent that?

If there were such a thing as "locking the entry in pg_type", it might
prevent that, but (a) there isn't, and (b) your code wouldn't invoke it 
if there were.  Reading a row should surely not be tantamount to
invoking an exclusive lock on it.

In any case, other backends might have the pg_type entry in their
syscaches, in which case their references to the type would be quite
free of any actual read of the pg_type row that might fall foul of
your hypothetical lock.

To make this work in a reliable way, there needs to be some concept
of acquiring a lock on the type as an entity, in the same way that
LockRelation acquires a lock on a relation as an entity --- which has
only the loosest possible connection to the notion of a lock on the
relation's pg_class row.  We have no such locks on types at present,
but I think it may be time to invent 'em.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [INTERFACES] Patch for DBD::Pg pg_relcheck problem

2002-12-10 Thread Bruce Momjian
Tom Lane wrote:
> Ian Barwick <[EMAIL PROTECTED]> writes:
> > Sounds good to me. Is it on the todo-list? (Couldn't see it there).
> 
> Probably not; Bruce for some reason has resisted listing protocol change
> desires as an identifiable TODO category.  There are a couple of threads
> in the pghackers archives over the last year or so that discuss the
> different things we want to do, though.  (Improving the error-reporting
> framework and fixing the COPY protocol are a couple of biggies I can
> recall offhand.)

Listing protocol changes seemed too low-level for the TODO list, but I
have kept the email messages.  Today I updated the TODO list and added a
section for them.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [BUGS] GEQO Triggers Server Crash

2002-12-10 Thread Bruce Momjian

Can we free only the plans we want to free in geqo?  I don't mind having
a different free method in geqo vs. the rest of the optimizer.

---

Tom Lane wrote:
> Kris Jurka <[EMAIL PROTECTED]> writes:
> > [ GEQO doesn't work anymore in CVS tip ]
> 
> Ugh.  The proximate cause of this is the code I added recently to cache
> repeated calculations of the best inner indexscan for a given inner
> relation with potential outer relations.  Since geqo_eval() releases
> all memory acquired during construction of a possible jointree, it
> releases the cached path info too.  The next attempt to use the data
> fails.
> 
> Naturally, ripping out the cache again doesn't strike me as an appealing
> solution.
> 
> The narrowest fix would be to hack best_inner_indexscan() to switch into
> the context containing the parent RelOptInfo while it makes a cache
> entry.  This seems kinda klugy but it would work.
> 
> I wonder if we'd be better off not trying to reclaim memory in
> geqo_eval.  Aside from presenting a constant risk of this sort of
> problem whenever someone hacks the optimizer, what it's really doing
> is discarding a whole lot of join cost estimates that are likely to
> be done over again in (some of) the following calls of geqo_eval.
> GEQO would certainly be a lot faster if we didn't release that info,
> and I'm not sure that the space cost would be as bad as the code
> comments claim.  Any thoughts?
> 
> This really just points up how messy memory management in the optimizer
> is at present.  I wonder if anyone has ideas on improving it ...
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly
> 
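(For reference, the "narrowest fix" described above amounts to something like
this inside best_inner_indexscan() -- MemoryContextSwitchTo() is the real API,
but the context variable and the cached-path field are placeholders, not the
actual code:)

    /* build the cache entry in a context that outlives geqo_eval()'s
     * temporary context, then switch back */
    MemoryContext oldcxt;

    oldcxt = MemoryContextSwitchTo(parent_rel_context);            /* placeholder */
    rel->cached_inner_paths = lappend(rel->cached_inner_paths,     /* placeholder */
                                      best_path);
    MemoryContextSwitchTo(oldcxt);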

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] Reusing Dead Tuples:

2002-12-10 Thread Janardhan
Hi,
  I am doing some experiments on dead tuples.  I am looking at reusing the
dead tuple space in a particular page during an "Update".  This patch is
meant for tables which are heavily updated, to avoid having to vacuum very
frequently.  Using it will arrest the growth of heavily updated tables.
The algorithm works like this:
1) During the update, check for dead tuples in the current page (the page
that contains the tuple that needs to be updated).  If any dead tuples are
found, reuse the dead tuple space by overwriting a dead tuple.  The check
for dead tuples is very similar to the one done by lazy vacuum.
2) If no dead tuple can be found, proceed as usual by inserting at the end
of the table.
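(For concreteness, a rough pseudocode sketch of the idea -- the page macros
are the usual PostgreSQL ones, but dead_and_large_enough(),
overwrite_dead_tuple() and insert_at_end_as_usual() are made-up placeholders,
not actual patch code:)

    /* inside the update path: look for a reusable dead slot on this page */
    OffsetNumber off;
    OffsetNumber reuse = InvalidOffsetNumber;

    for (off = FirstOffsetNumber; off <= PageGetMaxOffsetNumber(page); off++)
    {
        ItemId  lp = PageGetItemId(page, off);

        /* hypothetical check: slot is dead to all transactions and big enough */
        if (dead_and_large_enough(relation, page, lp, newtuple_len))
        {
            reuse = off;
            break;
        }
    }

    if (reuse != InvalidOffsetNumber)
        overwrite_dead_tuple(page, reuse, newtuple);    /* step 1: reuse space */
    else
        insert_at_end_as_usual(relation, newtuple);     /* step 2: normal path */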

Performance effect:
1) The CPU processing will be slightly higher for the update, but the I/O
processing is exactly the same.
2) The size of the table grows more slowly under heavy update, so vacuum is
not required as frequently.  The total processing for an update is more or
less the same even after doing a large number of updates without vacuum.

Does it break anything by overwriting the dead tuples?

Comments?

jana



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org


Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Bruce Momjian
Greg Copeland wrote:
> Seems like a mistake was made.  Let's (don't ya love how that sounds
> like I'm actually involved in the fix? ;)  fix it sooner rather than
> later.
> 
> Just curious, after a release, how come the numbers are not
> automatically bumped to ensure this type of thing gets caught sooner rather
> than later?  Is it possible to automate this as part of the build
> process so that they get grabbed from some version information during
> the build?

Version bump is one of the few things we do at the start of development.
For 7.2, I didn't actually stamp the 7.2 release so I never bumped them,
or I forgot.  Seems I also forgot for 7.1.  It is listed in
tools/RELEASE_CHANGES so it is just a matter of following that file.


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [mail] Re: 7.4 Wishlist

2002-12-10 Thread Greg Copeland
This has been brought up a couple of times now.  Feel free to search the
old archives for more information.  IIRC, it would have made the
implementation more problematic, or so I think it was said.

When I originally brought the topic (compression) up, it was not well
received.  As such, it may have been thought that additional effort on
such an implementation would not be worth the return on a feature which
most seemingly didn't see any purpose in supporting in the first place. 
You need to keep in mind that many simply advocated using a compressing
ssh tunnel.

Seems views may have changed some since then, so it may be worth
revisiting.  Admittedly, I have no idea what would be required to move
the toast data all the way through like that.  Any idea?  Implementing a
compression stream (which seems like what was done for Mammoth) or even
packet level compression were both something that I could comfortably
put my arms around in a timely manner.  Moving toast data around wasn't.


Greg


On Tue, 2002-12-10 at 18:45, Kyle wrote:
> Without getting into too many details, why not send toast data to
> non-local clients?  Seems that would be the big win.  The data is
> already compressed, so the server wouldn't pay cpu time to recompress
> anything.  And since toast data is relatively large anyway, it's the
> stuff you'd want to compress before putting it on the wire anyway.
> 
> If this is remotely possible let me know, I might be interested in
> taking a look at it.
> 
> -Kyle
> 
> Bruce Momjian wrote:
> > 
> > I am not excited about per-db/user compression because of the added
> > complexity of setting it up, and even set up, I can see cases where some
> > queries would want it, and others not.  I can see using GUC to control
> > this.  If you enable it and the client doesn't support it, it is a
> > no-op.  We have per-db and per-user settings, so GUC would allow such
> > control if you wish.
> > 
> > Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
> > meaning it would determine if there was value in the compression and do
> > it only when it would help.

-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [mail] Re: 7.4 Wishlist

2002-12-10 Thread Kyle
Without getting into too many details, why not send toast data to
non-local clients?  Seems that would be the big win.  The data is
already compressed, so the server wouldn't pay cpu time to recompress
anything.  And since toast data is relatively large anyway, it's the
stuff you'd want to compress before putting it on the wire anyway.

If this is remotely possible let me know, I might be interested in
taking a look at it.

-Kyle

Bruce Momjian wrote:
> 
> I am not excited about per-db/user compression because of the added
> complexity of setting it up, and even set up, I can see cases where some
> queries would want it, and others not.  I can see using GUC to control
> this.  If you enable it and the client doesn't support it, it is a
> no-op.  We have per-db and per-user settings, so GUC would allow such
> control if you wish.
> 
> Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
> meaning it would determine if there was value in the compression and do
> it only when it would help.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Greg Copeland
Seems like a mistake was made.  Let's (don't ya love how that sounds
like I'm actually involved in the fix? ;)  fix it sooner rather than
later.

Just curious, after a release, how come the numbers are not
automatically bumped to ensure this type of thing gets caught sooner rather
than later?  Is it possible to automate this as part of the build
process so that they get grabbed from some version information during
the build?

Greg


On Tue, 2002-12-10 at 17:36, Bruce Momjian wrote:
> OK, seeing that we don't have a third number, do people want me to
> increment the interface numbers for 7.3.1, or just wait for the
> increment in 7.4?
> 
> ---
> 
> Peter Eisentraut wrote:
> > Tom Lane writes:
> > 
> > > It is not real clear to me whether we need a major version bump, rather
> > > than a minor one.  We *do* need to signal binary incompatibility.  Who
> > > can clarify the rules here?
> > 
> > Strictly speaking, it's platform-dependent, but our shared library code
> > plays a bit of abuse with it.  What it comes down to is:
> > 
> > If you change or remove an interface, increment the major version number.
> > If you add an interface, increment the minor version number.  If you did
> > neither but changed the source code at all, increment the third version
> > number, if we had one.
> > 
> > To be thoroughly amused, read the libtool source.  Grep for 'version_type'.
> > 
> > -- 
> > Peter Eisentraut   [EMAIL PROTECTED]
> > 
> > 
-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Philip Warner
At 06:36 PM 10/12/2002 -0500, Bruce Momjian wrote:

do people want me to
increment the interface numbers for 7.3.1


I'd like it because I have to support & build against multiple versions.



Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 03 5330 3172  | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-10 Thread Bruce Momjian

OK, seeing that we don't have a third number, do people want me to
increment the interface numbers for 7.3.1, or just wait for the
increment in 7.4?

---

Peter Eisentraut wrote:
> Tom Lane writes:
> 
> > It is not real clear to me whether we need a major version bump, rather
> > than a minor one.  We *do* need to signal binary incompatibility.  Who
> > can clarify the rules here?
> 
> Strictly speaking, it's platform-dependent, but our shared library code
> plays a bit of abuse with it.  What it comes down to is:
> 
> If you change or remove an interface, increment the major version number.
> If you add an interface, increment the minor version number.  If you did
> neither but changed the source code at all, increment the third version
> number, if we had one.
> 
> To be thoroughly amused, read the libtool source.  Grep for 'version_type'.
> 
> -- 
> Peter Eisentraut   [EMAIL PROTECTED]
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] protocol change in 7.4

2002-12-10 Thread Bruce Momjian

I have added the following TODO item on protocol changes:

> * Wire Protocol Changes
>   o Show transaction status in psql
>   o Allow binding of query parameters, support for prepared queries
>   o Add optional textual message to NOTIFY
>   o Remove hard-coded limits on user/db/password names
>   o Remove unused elements of startup packet (unused, tty, passlength)
>   o Fix COPY/fastpath protocol?
>   o Replication support?
>   o Error codes
>   o Dynamic character set handling
>   o Special passing of binary values in platform-neutral format (bytea?)
>   o ecpg improvements?
>   o Add decoded type, length, precision
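(For the last item, the decoding that clients currently have to do by hand
looks roughly like this -- a sketch only; PQftype()/PQfmod() are real libpq
calls and NUMERICOID/VARHDRSZ are the usual catalog values, but verify the
typmod layout against the server headers before relying on it:)

    #include <stdio.h>
    #include <libpq-fe.h>

    #define NUMERICOID 1700     /* OID of the numeric type */
    #define VARHDRSZ   4

    /* Print precision/scale for a numeric result column, decoding the typmod
     * the way the backend encodes it: ((precision << 16) | scale) + VARHDRSZ. */
    static void
    show_numeric_info(PGresult *res, int col)
    {
        if (PQftype(res, col) == NUMERICOID)
        {
            int typmod = PQfmod(res, col);

            if (typmod >= VARHDRSZ)
            {
                int precision = ((typmod - VARHDRSZ) >> 16) & 0xffff;
                int scale = (typmod - VARHDRSZ) & 0xffff;

                printf("column %d: numeric(%d,%d)\n", col, precision, scale);
            }
            else
                printf("column %d: numeric with no explicit precision\n", col);
        }
    }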

---

snpe wrote:
> On Thursday 07 November 2002 09:50 pm, korry wrote:
> > > > b)  Send a decoded version of atttypmod - specifically, decode the
> > > > precision and scale for numeric types.
> > >
> > >I want type, length, precision and scale decoded
> >
> > Type is returned by PQftype(), length is returned by PQfsize().  Precision
> > and scale are encoded in the return value from PQfmod() and you have to
> > have a magic decoder ring to understand them. (Magic decoder rings are
> > available, you just have to read the source code :-)
> >
> > PQftype() is not easy to use because it returns an OID instead of a name
> > (or a standardized symbol), but I can't think of anything better to return
> > to the client.   Of course if you really want to make use of PQftype(), you
> > can preload a client-side cache of type definitions.  I seem to remember
> > seeing a patch a while back that would build the cache and decode precision
> > and scale too.
> >
> >   PQfsize() is entertaining, but not often what you really want (you really
> > want the width of the widest value in the column after conversion to some
> > string format - it seems reasonable to let the client applicatin worry
> > about that, although maybe that would be a useful client-side libpq
> > function).
> >
> >
> I want this in any catalog view
> 
> regards
> Haris Peco
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Rod Taylor
On Tue, 2002-12-10 at 12:39, Tom Lane wrote:
> I've been looking at the recently-committed ALTER DOMAIN patch, and I
> think it's got some serious if not fatal problems.  Specifically, the
> approach to adding/dropping constraints associated with domains doesn't
> work.
> 
> 1. Insufficient locking, guise 1: there's no protection against someone
> else dropping a column or whole table between the time you find a

Ok.. I obviously have to spend some time to figure out how locking works
and exactly what it affects.

I had incorrectly assumed that since dropping a column requires removal
of the pg_attribute entry, holding a RowExclusive lock on it would
prevent that.

> 2. Insufficient locking, guise 2: there's no protection against someone
> else adding a column or table while you're processing an ALTER DOMAIN,
> either.  This means that constraint checks will be missed.  Example:

Locking the entry in pg_type doesn't prevent that?  After all, something
does a test to see if the type exists prior to allowing the client to
add it.

> 3. Too much locking, guise 1: the ALTER DOMAIN command will acquire
> exclusive lock on every table that it scans, and will hold all these
> locks until it commits.  This can easily result in deadlocks --- against
> other ALTER DOMAIN commands, or just against any random other
> transaction that is unlucky enough to try to write any two tables
> touched by the ALTER DOMAIN.  AFAICS you don't need an exclusive lock,
> you just want to prevent updates of the table until the domain changes
> are committed, so ShareLock would be sufficient; that would reduce but
> not eliminate the risk of deadlock.

I noticed a completed TODO item that allows multiple locks to be
obtained simultaneously, and had intended to use that for this -- but
was having a hard time tracking down an example.

> 4. Too much locking, guise 2: the ExclusiveLock acquired on pg_class by
> get_rels_with_domain has no useful effect, since it's released again
> at the end of the scan; it does manage to shut down most sorts of schema
> changes while get_rels_with_domain runs, however.  This is bad enough,
> but:

Yeah... Trying to transfer the lock to the attributes -- which as you've
shown doesn't do what I thought.

> 5. Performance sucks.  In the regression database on my machine, "alter
> domain mydom set not null" takes over six seconds --- that's for a
> freshly created domain that's not used *anywhere*.  This can be blamed
> entirely on the inefficient implementation of get_rels_with_domain.

Yes, I need to (and intend to) redo this with dependencies, but hadn't
figured out how.   I'm surprised it took 6 seconds though.  I hardly
notice any delay on a database with ~30 tables in it.

> 6. Permission bogosity: as per discussion yesterday, ownership of a
> schema does not grant ownership rights on contained objects.

Patch submitted yesterday to correct this.

> 7. No mechanism for causing constraint changes to actually propagate
> after they are made.  This is more a fault of the design of the domain
> constraint patch than it is of the alter patch, but nonetheless alter is
> what exposes it.  The problem is particularly acute because you chose to
> insert a domain's constraint expressions into coercion operations at
> expression parsing time, which is far too early.  A stored rule that has
> a coerce-to-domain operation in it will have a frozen idea of what
> constraints it should be enforcing.  Probably the expression tree should
> just have a "CoerceToDomain foo" node in it, and at executor startup
> this node would have to look to the pg_type entry for foo to see exactly
> what it should be enforcing at the moment.

Thanks for the explanations.  I'll see if I can 1) fix my poor knowledge
of locking, 2) Add to my notes that I need to test stuff with Rules from
now on, and 3) correct the above items.
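
(For what it's worth, the node Tom describes in point 7 might carry something
like the following -- the field names are entirely hypothetical; the point is
only that constraints would be looked up at executor startup rather than
frozen into the tree at parse time:)

    /* hypothetical expression node: no constraint expressions are stored here;
     * executor startup would fetch the current ones from pg_type by OID */
    typedef struct CoerceToDomain
    {
        Node       *arg;            /* input expression being coerced */
        Oid         domaintype;     /* OID of the domain's pg_type entry */
        int32       domaintypmod;   /* typmod to apply, if any */
    } CoerceToDomain;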
-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Greg Copeland
On Tue, 2002-12-10 at 13:09, scott.marlowe wrote:
> On 10 Dec 2002, Rod Taylor wrote:
> > Perhaps a more appropriate rule would be 1 AVD per tablespace?  Since
> > PostgreSQL only has a single tablespace at the moment
> 
> But Postgresql can already place different databases on different data 
> stores.  I.e. initlocation and all.  If someone was using multiple SCSI 
> cards with multiple JBOD or RAID boxes hanging off of a box, they would 
> have the same thing, effectively, that you are talking about.
> 
> So, someone out there may well be able to use a multiple process AVD right 
> now.  Imagine m databases on n different drive sets for large production 
> databases.


That's right.  I always forget about that.  So, it seems, regardless of
the namespace effort, we shouldn't be limiting the number of concurrent
AVD's.


-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] INFORMATION_SCHEMA

2002-12-10 Thread Christopher Kings-Lynne
> > We could do DESCRIBE commands as well.  Also, what happened to the
> > INFORMATION_SCHEMA proposal?  Wasn't Peter E doing something with that?
> > What happened to it?
>
> Ooops.  Yeah, let's get this in.  Where should I put it?

I wouldn't mind having a look at the patch.  Where do you implement this
kind of thing?  Where in the code do you create system views and schemas?
Just add to the initdb script or something?

Adding this should allow us to move around 20 items from the sql99
unsupported list to the supported, which would be sweet.

Chris


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Bruce Momjian

Yes, the issue was that, given our TODO list, compressed transfer wasn't
a very high priority, and it was unknown how valuable it would be.  However, if it
were contributed, we could easily test its value with little work on our
part and include the code if it were a win.

---

Joshua D. Drake wrote:
> Hello,
> 
>We would probably be open to contributing it if there was interest. 
> There wasn't interest initially.
> 
> Sincerely,
> 
> Joshua Drake
> 
> 
> Bruce Momjian wrote:
> > Greg Copeland wrote:
> > 
> >>On Tue, 2002-12-10 at 11:25, Al Sutton wrote:
> >>
> >>>Would it be possible to make compression an optional thing, with the default
> >>>being off?
> >>>
> >>
> >>I'm not sure.  You'd have to ask Command Prompt (Mammoth) or wait to see
> >>what appears.  What I originally had envisioned was a per database and
> >>user permission model which would better control use.  Since compression
> >>can be rather costly for some use cases, I also envisioned it being
> >>negotiated where only the user/database combo with permission would be
> >>able to turn it on.  I do recall that compression negotiation is part of
> >>the Mammoth implementation but I don't know if it's a simple capability
> >>negotiation or part of a larger scheme.
> > 
> > 
> > I haven't heard anything about them contributing it.  Doesn't mean it
> > will not happen, just that I haven't heard it.
> > 
> > I am not excited about per-db/user compression because of the added
> > complexity of setting it up, and even set up, I can see cases where some
> > queries would want it, and others not.  I can see using GUC to control
> > this.  If you enable it and the client doesn't support it, it is a
> > no-op.  We have per-db and per-user settings, so GUC would allow such
> > control if you wish.
> > 
> > Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
> > meaning it would determine if there was value in the compression and do
> > it only when it would help.
> > 
> 
> -- 
> CommandPrompt- http://www.commandprompt.com  
>+1.503.222-2783  
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Greg Copeland
On Tue, 2002-12-10 at 13:38, Bruce Momjian wrote:

> I haven't heard anything about them contributing it.  Doesn't mean it
> will not happen, just that I haven't heard it.
> 

I was told this in non-mailing-list emails by Joshua Drake at Command
Prompt.  Of course, that doesn't mean it will be donated for sure, but
nonetheless, I was told it will be.

Here's a quote from one of the emails.  I don't think I'll be too far
out of line posting this.  On August 9, 2002, Joshua Drake said, "One we
plan on releasing this code to the developers after 7.3 comes out. We
want to be good members of the community but we have to keep a slight
commercial edge (wait to you see what we are going to do to vacuum)."

Obviously, I don't think that was official speak, so I'm not holding
them to the fire, nonetheless, that's what was said.  Additional follow
ups did seem to imply that they were very serious about this and REALLY
want to play nice as good shared source citizens.


> I am not excited about per-db/user compression because of the added
> complexity of setting it up, and even set up, I can see cases where some
> queries would want it, and others not.  I can see using GUC to control
> this.  If you enable it and the client doesn't support it, it is a
> no-op.  We have per-db and per-user settings, so GUC would allow such
> control if you wish.
> 

I never gave much thought to what form an actual implementation of
this aspect would take.  The reason for such a concept would be
to simply limit the number of users that can be granted compression.  If
you have a large user base all using compression or even a small user
base where very large result sets are common, I can imagine your
database server becoming CPU bound.  The database/user thinking was an
effort to allow the DBA to better manage the CPU effect.

> Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
> meaning it would determine if there was value in the compression and do
> it only when it would help.

Yes, that makes sense and was something I had originally envisioned. 
Simply stated, some installations may never want compression while
others may want it for every connection.  Beyond that, I believe there
needs to be something of a happy medium where a DBA can better control
who and what is taking his CPU away (e.g. only that one remote location
being fed via ISDN).  If GUC can fully satisfy, I certainly won't argue
against it.


-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] psql's \d commands --- end of the line for 1-character identifiers?

2002-12-10 Thread Tom Lane
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Christopher Kings-Lynne writes:
>> We could do DESCRIBE commands as well.  Also, what happened to the
>> INFORMATION_SCHEMA proposal?  Wasn't Peter E doing something with that?
>> What happened to it?

> Ooops.  Yeah, let's get this in.  Where should I put it?

How do you mean "where"?  The spec says it's gotta be called
information_schema, no?  What's left to decide?

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Geometry regression tests (was Re: [PATCHES] Alter domain)

2002-12-10 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> That's a pain.  Is there no way for config.guess to tell the difference
> >> between your system and the -STABLE versions?
> 
> > As I remember, the issue is that the only info is in a system header
> > file.
> 
> This is a bit of a kluge, but what about switching geometry over to the
> style Peter set up for locale differences?  Instead of calling out
> platform-by-platform expected files, we could arrange it so that
> pg_regress will accept a match on-the-fly to either of two (or more,
> but I think two will be enough now) geometry.out files.  Essentially
> we'd be saying that we don't really care whether specific platforms
> show positive or negative zeroes in that test.

I have to say I like this approach.  All our regression files are valid
in some way.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] Geometry regression tests (was Re: [PATCHES] Alter domain)

2002-12-10 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> That's a pain.  Is there no way for config.guess to tell the difference
>> between your system and the -STABLE versions?

> As I remember, the issue is that the only info is in a system header
> file.

This is a bit of a kluge, but what about switching geometry over to the
style Peter set up for locale differences?  Instead of calling out
platform-by-platform expected files, we could arrange it so that
pg_regress will accept a match on-the-fly to either of two (or more,
but I think two will be enough now) geometry.out files.  Essentially
we'd be saying that we don't really care whether specific platforms
show positive or negative zeroes in that test.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] pg_hba.conf parse error gives wrong line number

2002-12-10 Thread Bruce Momjian

I see the problem with the line number here.  I will work on a fix now. 
Thanks.


---

Oliver Elphick wrote:
> With this pg_hba.conf (line numbers from vi, of course):
> 
>   48 # TYPE  DATABASE    USER    IP-ADDRESS    IP-MASK     METHOD
>   49 
>   50 local   all         all                               ident sameuser
>   51 host    all         127.0.0.1     127.0.0.1           ident sameuser
>   52 
> 
> we naturally get a parse error because of the missing user column entry
> in line 51.  But in the log we see:
> 
> Dec 10 19:27:42 linda postgres[10944]: [8] LOG:  parse_hba: invalid
> syntax in pg_hba.conf file at line 95, token "ident"
> 
> In a more complicated file, a bogus line number is going to make
> debugging very tricky.  I tried following this in gdb, but haven't
> managed to track it through the fork of the new backend.
> 
> -- 
> Oliver Elphick[EMAIL PROTECTED]
> Isle of Wight, UK http://www.lfix.co.uk/oliver
> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
>  
>  "I beseech you therefore, brethren, by the mercies of 
>   God, that ye present your bodies a living sacrifice, 
>   holy, acceptable unto God, which is your reasonable 
>   service."   Romans 12:1 
> 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Croatian language file for 7.3

2002-12-10 Thread Peter Eisentraut
Done.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] psql's \d commands --- end of the line for 1-character identifiers?

2002-12-10 Thread Peter Eisentraut
Christopher Kings-Lynne writes:

> We could do DESCRIBE commands as well.  Also, what happened to the
> INFORMATION_SCHEMA proposal?  Wasn't Peter E doing something with that?
> What happened to it?

Ooops.  Yeah, let's get this in.  Where should I put it?

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] psql's \d commands --- end of the line for

2002-12-10 Thread Peter Eisentraut
Alvaro Herrera writes:

> Would it work to make \d tab-completable in a way that showed both the
> commands that are available and the objects they describe? e.g.
>
> \d would show something like
> \dt [tables]  \ds [sequences] \dv [views] ...

That won't work.  The actual completion and the view of the alternatives
if the completion is ambiguous is driven by the same data.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] psql's \d commands --- end of the line for

2002-12-10 Thread Christopher Kings-Lynne


> At 01:25 AM 10/12/2002 -0500, Tom Lane wrote:
> >Let's
> >get a bit realistic on the ease-of-typing arguments here.
>
> It's a fair cop, but don't forget the memory argument as well - I did say I
> was happy with \d providing prompts, and DESCRIBE is a little more
> portable & memorable than \d[hieroglyphic].

I think the problem with DESCRIBE is that it's supposed to just return a
recordset.  I don't see it showing fk's, indexes, rules, etc. as well...

Chris


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Bruce Momjian
Greg Copeland wrote:
> On Tue, 2002-12-10 at 11:25, Al Sutton wrote:
> > Would it be possible to make compression an optional thing, with the default
> > being off?
> > 
> 
> I'm not sure.  You'd have to ask Command Prompt (Mammoth) or wait to see
> what appears.  What I originally had envisioned was a per database and
> user permission model which would better control use.  Since compression
> can be rather costly for some use cases, I also envisioned it being
> negotiated where only the user/database combo with permission would be
> able to turn it on.  I do recall that compression negotiation is part of
> the Mammoth implementation but I don't know if it's a simple capability
> negotiation or part of a larger scheme.

I haven't heard anything about them contributing it.  Doesn't mean it
will not happen, just that I haven't heard it.

I am not excited about per-db/user compression because of the added
complexity of setting it up, and even set up, I can see cases where some
queries would want it, and others not.  I can see using GUC to control
this.  If you enable it and the client doesn't support it, it is a
no-op.  We have per-db and per-user settings, so GUC would allow such
control if you wish.

Ideally, it would be a tri-valued parameter, that is ON, OFF, or AUTO,
meaning it would determine if there was value in the compression and do
it only when it would help.
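
(A throwaway sketch of what AUTO might mean in practice -- nothing like this
exists, and the names and thresholds are made up: skip compression for small
messages and back off when recent data has compressed poorly:)

    #include <stdbool.h>
    #include <stddef.h>

    #define COMPRESS_MIN_BYTES 1024    /* tiny messages aren't worth the CPU */

    typedef enum { COMPRESS_OFF, COMPRESS_ON, COMPRESS_AUTO } CompressMode;

    /* Decide per-message whether compressing the outgoing data is worthwhile.
     * recent_ratio is compressed/uncompressed size observed on recent traffic. */
    static bool
    should_compress(CompressMode mode, size_t msglen, double recent_ratio)
    {
        if (mode == COMPRESS_OFF)
            return false;
        if (mode == COMPRESS_ON)
            return true;
        /* AUTO: skip small messages, and give up if data compresses badly */
        return msglen >= COMPRESS_MIN_BYTES && recent_ratio < 0.9;
    }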

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] pg_hba.conf parse error gives wrong line number

2002-12-10 Thread Oliver Elphick
With this pg_hba.conf (line numbers from vi, of course):

  48 # TYPE  DATABASE    USER    IP-ADDRESS    IP-MASK     METHOD
  49 
  50 local   all         all                               ident sameuser
  51 host    all         127.0.0.1     127.0.0.1           ident sameuser
  52 

we naturally get a parse error because of the missing user column entry
in line 51.  But in the log we see:

Dec 10 19:27:42 linda postgres[10944]: [8] LOG:  parse_hba: invalid
syntax in pg_hba.conf file at line 95, token "ident"

In a more complicated file, a bogus line number is going to make
debugging very tricky.  I tried following this in gdb, but haven't
managed to track it through the fork of the new backend.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "I beseech you therefore, brethren, by the mercies of 
  God, that ye present your bodies a living sacrifice, 
  holy, acceptable unto God, which is your reasonable 
  service."   Romans 12:1 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] PostgreSQL 7.3 Installation on SCO

2002-12-10 Thread Bruce Momjian

OK, I wonder if adding -ldl will help.  You need to link to the library
containing the dlopen function.

---

Shibashish wrote:
> Thanks for the help. I edited src/makefiles/Makefile.sco and removed
> the export, but the compile still stops with the following errors.
> I tried the following combinations too:
> export_dynamic = -Wl,-Bexport
> export_dynamic = -Wl
> #export_dynamic = -Wl,-Bexport {stops at the following output}
> 
> I will send the full output if you need it.
> 
> -
> make[4]: Leaving directory
> `/data/postgres/postgresql-7.3/src/backend/utils/mb'
> /usr/ccs/bin/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o
> error/SUBSYS.o fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o
> mmgr/SUBSYS.o sort/SUBSYS.o time/SUBSYS.o mb/SUBSYS.o
> make[3]: Leaving directory
> `/data/postgres/postgresql-7.3/src/backend/utils'
> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -L../../src/port
> access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o
> commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o
> main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o
> postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o
> tcop/SUBSYS.o utils/SUBSYS.o -lPW -lgen -lld -lsocket -lnsl -lm  -lpgport
> -o postgres
> undefined   first referenced
>  symbol in file
> _dlopen utils/SUBSYS.o
> _dlerrorutils/SUBSYS.o
> _dlsym  utils/SUBSYS.o
> _dlcloseutils/SUBSYS.o
> i386ld fatal: Symbol referencing errors. No output written to postgres
> make[2]: *** [postgres] Error 1
> make[2]: Leaving directory `/data/postgres/postgresql-7.3/src/backend'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/data/postgres/postgresql-7.3/src'
> make: *** [all] Error 2
> -
> 
> from Shibashish
> 
> 
> On Mon, 9 Dec 2002, Bruce Momjian wrote:
> 
> >
> > It should have worked, but edit Makefile.shlib and remove that offending
> > export from the link line.  That may fix it.
> >
> > ---
> 
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Greg Copeland
On Tue, 2002-12-10 at 11:25, Al Sutton wrote:
> Would it be possible to make compression an optional thing, with the default
> being off?
> 

I'm not sure.  You'd have to ask Command Prompt (Mammoth) or wait to see
what appears.  What I originally had envisioned was a per database and
user permission model which would better control use.  Since compression
can be rather costly for some use cases, I also envisioned it being
negotiated where only the user/database combo with permission would be
able to turn it on.  I do recall that compression negotiation is part of
the Mammoth implementation but I don't know if it's a simple capability
negotiation or part of a larger scheme.

The reason I originally imagined a user/database type approach is
because I would think only a subset of a typical installation would be
needing compression.  As such, this would help prevent users from
arbitrarily chewing up database CPU compressing data because:
o datasets are incompressible or compress poorly
o CPU in the environment is at a premium
o the environment is bandwidth-rich


> I'm in a position that many others may be in where the link between my app
> server and my database server isn't the bottleneck, and thus any time spent
> by the CPU performing compression and decompression tasks is CPU time that
> is in effect wasted.

Agreed.  This is why I'd *guess* that Mammoth's implementation does not
force compression.

> 
> If a database is handling numerous small queries/updates and the
> request/response packets are compressed individually, then the overhead of
> compression and decompression may result in slower performance compared to
> leaving the request/response packets uncompressed.

Again, this is where I'm gray on their exact implementation.  It's
possible they implemented a compressed stream even though I'm hoping
they implemented a per packet compression scheme (because adaptive
compression becomes much more capable and powerful, both
algorithmically and logistically).  An example of this would be to
avoid any compression on trivially sized result sets. Again, this is
another area where I can imagine some tunable parameters.
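
To make the per-packet idea concrete, here is a minimal sketch of the kind of
decision a sender could make for each result packet, assuming zlib and two
hypothetical tunables for minimum packet size and required savings.  This is
only an illustration of adaptive per-packet compression, not Mammoth's code
(which none of us have seen):

#include <stddef.h>
#include <zlib.h>

/* Hypothetical tunables -- in a real server these would be settings. */
#define COMPRESS_MIN_BYTES   512    /* skip trivially sized packets */
#define COMPRESS_MIN_SAVINGS 0.10   /* require at least 10% reduction */

/*
 * Try to compress one wire packet.  Returns 1 and fills dest/destlen if the
 * compressed form is worth sending; returns 0 if the caller should send the
 * packet uncompressed (too small, incompressible, or not enough savings to
 * justify the CPU).  The caller sizes dest via compressBound(srclen).
 */
static int
maybe_compress_packet(const char *src, size_t srclen,
                      char *dest, size_t destcap, size_t *destlen)
{
    uLongf      outlen = destcap;

    if (srclen < COMPRESS_MIN_BYTES)
        return 0;
    if (compress2((Bytef *) dest, &outlen,
                  (const Bytef *) src, (uLong) srclen,
                  Z_DEFAULT_COMPRESSION) != Z_OK)
        return 0;
    if ((double) outlen > (double) srclen * (1.0 - COMPRESS_MIN_SAVINGS))
        return 0;
    *destlen = (size_t) outlen;
    return 1;
}

The packet would then be flagged as compressed in the protocol, and the
per-user/database permission described above would simply decide whether this
function ever gets called for a given connection.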

Just to be on the safe side, I'm cc'ing Josh Drake at Command Prompt
(Mammoth) to see what they can offer up on it.  Hope you guys don't
mind.


Greg



> - Original Message -
> From: "Greg Copeland" <[EMAIL PROTECTED]>
> To: "Stephen L." <[EMAIL PROTECTED]>
> Cc: "PostgresSQL Hackers Mailing List" <[EMAIL PROTECTED]>
> Sent: Tuesday, December 10, 2002 4:56 PM
> Subject: [mail] Re: [HACKERS] 7.4 Wishlist
> 
> 
> > On Tue, 2002-12-10 at 09:36, Stephen L. wrote:
> > > 6. Compression between client/server interface like in MySQL
> > >
> >
> > Mammoth is supposed to be donating their compression efforts back to
> > this project, or so I've been told.  I'm not exactly sure of their
> > time-line as I've slept since my last conversation with them.  The
> > initial feedback that I've gotten back from them on this subject is that
> > the compression has been working wonderfully for them with excellent
> > results.  IIRC, in their last official release, they announced their
> > compression implementation.  So, I'd think that it would be available
> > for the 7.4 or 7.5 time frame.
> >
> >
> > --
> > Greg Copeland <[EMAIL PROTECTED]>
> > Copeland Computer Consulting
> >
> >
> > ---(end of broadcast)---
> > TIP 4: Don't 'kill -9' the postmaster
> >
-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread scott.marlowe
On 10 Dec 2002, Rod Taylor wrote:

> > > Not sure what you mean by that, but it sounds like the behaviour of my AVD 
> > > (having it block until the vacuum command completes) is fine, and perhaps 
> > > preferrable. 
> > 
> > I can easily imagine larger systems with multiple CPUs and multiple disk
> > and card bundles to support multiple databases.  In this case, I have a
> > hard time figuring out why you'd not want to allow multiple concurrent
> > vacuums.  I guess I can understand a recommendation of only allowing a
> > single vacuum, however, should it be mandated that AVD will ONLY be able
> > to perform a single vacuum at a time?
> 
> Hmm.. CPU time (from what I've seen) isn't an issue.  Strictly disk. The
> big problem with multiple vacuums is determining which tables are in
> common areas.
> 
> Perhaps a more appropriate rule would be 1 AVD per tablespace?  Since
> PostgreSQL only has a single tablespace at the moment

But Postgresql can already place different databases on different data 
stores.  I.e. initlocation and all.  If someone was using multiple SCSI 
cards with multiple JBOD or RAID boxes hanging off of a box, they would 
have the same thing, effectively, that you are talking about.

So, someone out there may well be able to use a multiple process AVD right 
now.  Imagine m databases on n different drive sets for large production 
databases.
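
To illustrate what that could look like, here is a rough sketch of an
AVD-style driver that gives each drive set (reachable here as a separate
database) its own worker, so different drive sets are vacuumed in parallel
while each one only ever sees a single vacuum at a time.  The connection
strings and table lists are made up, and this shows only the scheduling
policy, not anyone's actual daemon code:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <libpq-fe.h>

/* One entry per database/drive set; the contents here are hypothetical. */
struct area
{
    const char *conninfo;
    const char *tables[8];      /* NULL-terminated list to vacuum */
};

/* Vacuum one area's tables serially, blocking on each VACUUM. */
static void
vacuum_area(const struct area *a)
{
    PGconn     *conn = PQconnectdb(a->conninfo);
    char        cmd[256];
    int         i;

    if (PQstatus(conn) == CONNECTION_OK)
        for (i = 0; a->tables[i] != NULL; i++)
        {
            PGresult   *res;

            snprintf(cmd, sizeof(cmd), "VACUUM ANALYZE %s", a->tables[i]);
            res = PQexec(conn, cmd);
            if (res)
                PQclear(res);
        }
    PQfinish(conn);
}

int
main(void)
{
    struct area areas[] = {
        {"dbname=sales", {"orders", "customers", NULL}},
        {"dbname=logs",  {"hits", NULL}},
    };
    int         i;

    for (i = 0; i < (int) (sizeof(areas) / sizeof(areas[0])); i++)
        if (fork() == 0)        /* one worker per drive set */
        {
            vacuum_area(&areas[i]);
            _exit(0);
        }
    while (wait(NULL) > 0)      /* reap all workers */
        ;
    return 0;
}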


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Let's create a release team

2002-12-10 Thread Bruce Momjian
Dan Langille wrote:
> > But if you want to try to document the process better, there are some
> > details written down already (eg, src/tools/RELEASE_CHANGES) and I'm
> > sure Marc and Bruce would cooperate in writing down more.
> 
> That's a good start. It looks like a list of things easily forgotten 
> but if forgotten, make us look bad.

There's not much I can add to that list.  It is everything I normally
check.  Of course, Marc does a whole bunch of other things, but I am not
involved in that.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Rod Taylor
On Tue, 2002-12-10 at 12:00, Greg Copeland wrote:
> On Tue, 2002-12-10 at 08:42, Rod Taylor wrote:
> > > > Not sure what you mean by that, but it sounds like the behaviour of my AVD 
> > > > (having it block until the vacuum command completes) is fine, and perhaps 
> > > > preferrable. 
> > > 
> > > I can easily imagine larger systems with multiple CPUs and multiple disk
> > > and card bundles to support multiple databases.  In this case, I have a
> > > hard time figuring out why you'd not want to allow multiple concurrent
> > > vacuums.  I guess I can understand a recommendation of only allowing a
> > > single vacuum, however, should it be mandated that AVD will ONLY be able
> > > to perform a single vacuum at a time?
> > 
> > Hmm.. CPU time (from what I've seen) isn't an issue.  Strictly disk. The
> > big problem with multiple vacuums is determining which tables are in
> > common areas.
> > 
> > Perhaps a more appropriate rule would be 1 AVD per tablespace?  Since
> > PostgreSQL only has a single tablespace at the moment
> 
> But tablespace is planned for 7.4 right?  Since tablespace is supposed
> to go in for 7.4, I think you've hit the nail on the head.  One AVD per
> tablespace sounds just right to me.

Planned if someone implements it and manages to have it committed prior
to release.

-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc





[HACKERS] Problems with ALTER DOMAIN patch

2002-12-10 Thread Tom Lane
I've been looking at the recently-committed ALTER DOMAIN patch, and I
think it's got some serious if not fatal problems.  Specifically, the
approach to adding/dropping constraints associated with domains doesn't
work.

1. Insufficient locking, guise 1: there's no protection against someone
else dropping a column or whole table between the time you find a
pg_attribute entry in get_rels_with_domain and the time you actually
process it in AlterDomainNotNull or AlterDomainAddConstraint.  The code
appears to think that holding RowExclusiveLock on pg_attribute will
protect it somehow, but that doesn't (and shouldn't) do any such thing.
This will result in at best an elog and at worst coredump when you try
to scan the no-longer-present table or column.

2. Insufficient locking, guise 2: there's no protection against someone
else adding a column or table while you're processing an ALTER DOMAIN,
either.  This means that constraint checks will be missed.  Example:

<< backend 1 >>

regression=# create domain mydom int4;
CREATE DOMAIN
regression=# begin;
BEGIN
regression=# alter domain mydom set not null;
ALTER DOMAIN

<< don't commit yet; in backend 2 do >>

regression=# create table foo (f1 mydom);
CREATE TABLE
regression=# insert into foo values(null);
INSERT 149688 1

<< now in backend 1: >>

regression=# commit;
COMMIT

<< now in backend 2: >>

regression=# insert into foo values(null);
ERROR:  Domain mydom does not allow NULL values
regression=# select * from foo;
 f1
----
 
(1 row)

Not a very watertight domain constraint, is it?  The begin/commit is not
necessary to cause a failure, it just makes it easy to make the window
for failure wide enough to hit in a manually entered example.

3. Too much locking, guise 1: the ALTER DOMAIN command will acquire
exclusive lock on every table that it scans, and will hold all these
locks until it commits.  This can easily result in deadlocks --- against
other ALTER DOMAIN commands, or just against any random other
transaction that is unlucky enough to try to write any two tables
touched by the ALTER DOMAIN.  AFAICS you don't need an exclusive lock,
you just want to prevent updates of the table until the domain changes
are committed, so ShareLock would be sufficient; that would reduce but
not eliminate the risk of deadlock.

4. Too much locking, guise 2: the ExclusiveLock acquired on pg_class by
get_rels_with_domain has no useful effect, since it's released again
at the end of the scan; it does manage to shut down most sorts of schema
changes while get_rels_with_domain runs, however.  This is bad enough,
but:

5. Performance sucks.  In the regression database on my machine, "alter
domain mydom set not null" takes over six seconds --- that's for a
freshly created domain that's not used *anywhere*.  This can be blamed
entirely on the inefficient implementation of get_rels_with_domain.
In a database with more tables performance would get much worse; it's
basically O(N^2).  And it's holding ExclusiveLock on pg_class the whole
time :-(.  (A reasonably efficient way to make the same search would be
to use pg_depend to look for columns that depend on the domain type ---
this might find a few indirect dependencies, but it would certainly be
lots faster than repeated seqscans over pg_attribute; a sketch of such a
query appears below.)

6. Permission bogosity: as per discussion yesterday, ownership of a
schema does not grant ownership rights on contained objects.

7. No mechanism for causing constraint changes to actually propagate
after they are made.  This is more a fault of the design of the domain
constraint patch than it is of the alter patch, but nonetheless alter is
what exposes it.  The problem is particularly acute because you chose to
insert a domain's constraint expressions into coercion operations at
expression parsing time, which is far too early.  A stored rule that has
a coerce-to-domain operation in it will have a frozen idea of what
constraints it should be enforcing.  Probably the expression tree should
just have a "CoerceToDomain foo" node in it, and at executor startup
this node would have to look to the pg_type entry for foo to see exactly
what it should be enforcing at the moment.


Some of these are fixable, but I don't actually see any fix for point 2
short of creating some entirely new locking convention.  Currently, only
relations can be locked, but you'd really need an enforceable lock on
the type itself to make a watertight solution, I think.  Since we've
never had any sort of supported ALTER TYPE operation before, the issue
hasn't come up before ...
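
Returning to point 5, the pg_depend search could be expressed roughly as
below.  This is only a sketch, written as a stand-alone libpq program against
a hypothetical domain named mydom rather than as the backend code ALTER
DOMAIN would actually use, and as noted above it may also surface some
indirect dependencies:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Columns depending on the domain: pg_depend records them with
     * classid = pg_class, objid = the table, objsubid = the column number. */
    const char *query =
        "SELECT c.relname, a.attname "
        "  FROM pg_depend d "
        "  JOIN pg_class c     ON c.oid = d.objid "
        "  JOIN pg_attribute a ON a.attrelid = d.objid "
        "                     AND a.attnum = d.objsubid "
        " WHERE d.classid    = (SELECT oid FROM pg_class WHERE relname = 'pg_class') "
        "   AND d.refclassid = (SELECT oid FROM pg_class WHERE relname = 'pg_type') "
        "   AND d.refobjid   = (SELECT oid FROM pg_type  WHERE typname = 'mydom')";
    PGconn     *conn = PQconnectdb("dbname=regression");
    PGresult   *res;
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }
    res = PQexec(conn, query);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (i = 0; i < PQntuples(res); i++)
            printf("%s.%s uses domain mydom\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    PQclear(res);
    PQfinish(conn);
    return 0;
}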

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [mail] Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Al Sutton
Would it be possible to make compression an optional thing, with the default
being off?

I'm in a position that many others may be in where the link between my app
server and my database server isn't the bottleneck, and thus any time spent
by the CPU performing compression and decompression tasks is CPU time that
is in effect wasted.

If a database is handling numerous small queries/updates and the
request/response packets are compressed individually, then the overhead of
compression and decompression may result in slower performance compared to
leaving the request/response packets uncompressed.

Al.

- Original Message -
From: "Greg Copeland" <[EMAIL PROTECTED]>
To: "Stephen L." <[EMAIL PROTECTED]>
Cc: "PostgresSQL Hackers Mailing List" <[EMAIL PROTECTED]>
Sent: Tuesday, December 10, 2002 4:56 PM
Subject: [mail] Re: [HACKERS] 7.4 Wishlist


> On Tue, 2002-12-10 at 09:36, Stephen L. wrote:
> > 6. Compression between client/server interface like in MySQL
> >
>
> Mammoth is supposed to be donating their compression efforts back to
> this project, or so I've been told.  I'm not exactly sure of their
> time-line as I've slept since my last conversation with them.  The
> initial feedback that I've gotten back from them on this subject is that
> the compression has been working wonderfully for them with excellent
> results.  IIRC, in their last official release, they announced their
> compression implementation.  So, I'd think that it would be available
> for the 7.4 or 7.5 time frame.
>
>
> --
> Greg Copeland <[EMAIL PROTECTED]>
> Copeland Computer Consulting
>
>
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
>



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Greg Copeland
On Tue, 2002-12-10 at 08:42, Rod Taylor wrote:
> > > Not sure what you mean by that, but it sounds like the behaviour of my AVD 
> > > (having it block until the vacuum command completes) is fine, and perhaps 
> > > preferrable. 
> > 
> > I can easily imagine larger systems with multiple CPUs and multiple disk
> > and card bundles to support multiple databases.  In this case, I have a
> > hard time figuring out why you'd not want to allow multiple concurrent
> > vacuums.  I guess I can understand a recommendation of only allowing a
> > single vacuum, however, should it be mandated that AVD will ONLY be able
> > to perform a single vacuum at a time?
> 
> Hmm.. CPU time (from what I've seen) isn't an issue.  Strictly disk. The
> big problem with multiple vacuums is determining which tables are in
> common areas.
> 
> Perhaps a more appropriate rule would be 1 AVD per tablespace?  Since
> PostgreSQL only has a single tablespace at the moment

But tablespace is planned for 7.4 right?  Since tablespace is supposed
to go in for 7.4, I think you've hit the nail on the head.  One AVD per
tablespace sounds just right to me.


-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Greg Copeland
On Tue, 2002-12-10 at 09:36, Stephen L. wrote:
> 6. Compression between client/server interface like in MySQL
> 

Mammoth is supposed to be donating their compression efforts back to
this project, or so I've been told.  I'm not exactly sure of their
time-line as I've slept since my last conversation with them.  The
initial feedback that I've gotten back from them on this subject is that
the compression has been working wonderfully for them with excellent
results.  IIRC, in their last official release, they announced their
compression implementation.  So, I'd think that it would be available
for the 7.4 or 7.5 time frame.


-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [INTERFACES] Patch for DBD::Pg pg_relcheck problem

2002-12-10 Thread Ian Barwick
(no followup to [EMAIL PROTECTED], getting a little OT there)

On Tuesday 10 December 2002 16:54, Lee Kindness wrote:
> Ian Barwick writes:
>  > Something along the lines of
>  >   char *PQversion(const PGconn *conn) ?
>
> Probably:
>
>  int PQversion(const PGconn *conn)
>
> would be better, and easier to parse? For example the value returned
> for 7.3.1 would be 7003001; for 7.4 7004000; for 101.10.2
> 101010002. This allows simple numerical tests...

Sounds logical - I was evidently thinking in Perl ;-).

For reference pg_dump currently parses the SELECT version() string
into an integer thus:

7.2           70200
7.2.1         70201
7.3devel      70300
7.3rc1        70300
7.3.1         70301
7.3.99        70399
7.399.399    110299
101.10.2    1011002

(and just for fun:
"11i Enterprise Edition with Bells and Whistles "
returns -1 ;-)

which works with minor release numbers of 99
and below.
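
A rough C equivalent of that parse (not the actual pg_dump code, just the
same idea) would be:

#include <stdio.h>

/*
 * Parse "7.3.1", "7.4devel", etc. into major*10000 + minor*100 + revision,
 * returning -1 if the string does not start with a recognizable version.
 * Trailing text such as "devel" or "rc1" is ignored, so 7.3devel and
 * 7.3rc1 both map to 70300.
 */
static int
version_to_number(const char *vstr)
{
    int         major = 0,
                minor = 0,
                rev = 0;

    if (sscanf(vstr, "%d.%d.%d", &major, &minor, &rev) < 2)
        return -1;
    return major * 10000 + minor * 100 + rev;
}

int
main(void)
{
    printf("%d\n", version_to_number("7.3.1"));         /* 70301 */
    printf("%d\n", version_to_number("7.3devel"));      /* 70300 */
    printf("%d\n", version_to_number("101.10.2"));      /* 1011002 */
    printf("%d\n", version_to_number("11i Enterprise Edition"));   /* -1 */
    return 0;
}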

Ian Barwick
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Let's create a release team

2002-12-10 Thread Lamar Owen
On Tuesday 10 December 2002 00:24, Justin Clift wrote:
> RPM's & SRPM's

>   - Co-ordinate with Lamar to have these ready before the general
> announcement?

As I am merely a volunteer in this, the availability of RPMs is directly 
impacted by my workload.  There are several times during the year that my 
workload goes from being just difficult to absolutely swamping.  These times 
are typically during mid February through early March; late August through 
late September; and November through January.

See, not only am I the 'Chief Engineer' for several radio stations, but I am 
also the 'IT Director' for WGCR, and the 'Network Administrator' for PARI.  
The Chief Engineer duties include generator work, transmitter work, and 
studio work -- and in winter there is alot of the generator/transmitter work 
in the mix.  The 'IT Director' hat includes eradicating virus infections, 
unlicensed software, etc.  This is currently my busiest area, as we try to 
put our entire fundraising system on our intranet (backed by PostgreSQL, of 
course).  While I say 'we,' I really should say 'I,' as I am the entirety of 
the programming team in this project.  Fortunately I have access to an 
interface design consultant and a good web designer.

I was able to get the RPMs out when I did almost entirely due to the ice storm 
that paralyzed the Carolinas last week -- our particular area did not get hit 
hard with ice, but got mostly snow, which then changed to mostly rain later 
in the day.  So we didn't lose power -- and so I was able to get them done, 
since I was unable to travel to work.

Typically, I would try to track the betas and release candidates (like I did 
with previous releases, to varying degrees), and with the 24 hour notice we 
all get on this list I can have a general release RPM ready.  During this 
cycle I found myself excessively swamped by work -- so I was unable to 
generate RPM's until the general release.  For that I apologize.  I cannot 
guarantee that it won't happen again; but I will try to prevent its 
recurrence.

For the 7.0 cycle, during the maintenance releases, I was retained by Great 
Bridge to produce RPMs -- that ensured that I spent time on them for that 
cycle.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] [INTERFACES] Patch for DBD::Pg pg_relcheck problem

2002-12-10 Thread Lee Kindness
Ian Barwick writes:
 > On Tuesday 10 December 2002 00:47, Tom Lane wrote:
 > > In the next protocol version update (hopefully 7.4) I would like to see
 > > the basic version string (eg, "7.3.1" or "7.4devel") delivered to the
 > > client automatically during connection startup and then available from a
 > > libpq inquiry function.  This would eliminate the need to call version()
 > > explicitly and to know that you must skip "PostgreSQL " in its output.
 > Something along the lines of 
 >   char *PQversion(const PGconn *conn) ?

Probably:

 int PQversion(const PGconn *conn)

would be better, and easier to parse? For example the value returned
for 7.3.1 would be 7003001; for 7.4 7004000; for 101.10.2
101010002. This allows simple numerical tests...
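
Until such a function exists, the same number can be computed on the client
by calling version() and skipping the "PostgreSQL " prefix, as Tom mentions
above.  A sketch (the name PQversion and this 7003001-style encoding are only
the proposal, not current libpq):

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/*
 * Emulate the proposed int PQversion(const PGconn *): return the server
 * version as major*1000000 + minor*1000 + revision (7.3.1 -> 7003001,
 * 7.4 -> 7004000), or -1 if it cannot be determined.
 */
static int
server_version_number(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "SELECT version()");
    int         major,
                minor = 0,
                rev = 0,
                num = -1;

    if (res && PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
    {
        const char *v = PQgetvalue(res, 0, 0);

        if (strncmp(v, "PostgreSQL ", 11) == 0 &&
            sscanf(v + 11, "%d.%d.%d", &major, &minor, &rev) >= 2)
            num = major * 1000000 + minor * 1000 + rev;
    }
    if (res)
        PQclear(res);
    return num;
}

/* The simple numerical test this enables, e.g. wanted = 7003000 for 7.3. */
static int
server_is_at_least(PGconn *conn, int wanted)
{
    return server_version_number(conn) >= wanted;
}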

Lee.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [INTERFACES] Patch for DBD::Pg pg_relcheck problem

2002-12-10 Thread Tom Lane
Ian Barwick <[EMAIL PROTECTED]> writes:
> Sounds good to me. Is it on the todo-list? (Couldn't see it there).

Probably not; Bruce for some reason has resisted listing protocol change
desires as an identifiable TODO category.  There are a couple of threads
in the pghackers archives over the last year or so that discuss the
different things we want to do, though.  (Improving the error-reporting
framework and fixing the COPY protocol are a couple of biggies I can
recall offhand.)

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] [INTERFACES] Patch for DBD::Pg pg_relcheck problem

2002-12-10 Thread Ian Barwick

(crossposting to hackers)

On Tuesday 10 December 2002 00:47, Tom Lane wrote:
> In the next protocol version update (hopefully 7.4) I would like to see
> the basic version string (eg, "7.3.1" or "7.4devel") delivered to the
> client automatically during connection startup and then available from a
> libpq inquiry function.  This would eliminate the need to call version()
> explicitly and to know that you must skip "PostgreSQL " in its output.

Something along the lines of
  char *PQversion(const PGconn *conn) ?

> However, it will only help for clients/libraries that are willing to
> deal exclusively with 7.4-or-newer backends, so it will take a few
> releases to become really useful.

Sounds good to me. Is it on the todo-list? (Couldn't see it there).

Ian Barwick
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.4 Wishlist

2002-12-10 Thread Stephen L.
Hi, if I may add to the wishlist for 7.4 in order of importance. Some items
may have been mentioned or disputed already but I think they are quite
important:

1. Avoid needing REINDEX after large insert/deletes or make REINDEX not use
exclusive lock on table.
2. Automate VACUUM in background and make database more
interactive/responsive during long VACUUMs
3. Replication
4. Point-in-time recovery
5. Maintain automatic clustering (CLUSTER) even after subsequent
insert/updates.
6. Compression between client/server interface like in MySQL

Thanks,

Stephen
jleelim(at)hotmail.com



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] psql's \d commands --- end of the line for

2002-12-10 Thread Hannu Krosing
On Mon, 2002-12-09 at 23:12, Philip Warner wrote:
> At 05:13 PM 9/12/2002 -0500, Tom Lane wrote:
> >Seems like a fine idea to me.
> 
> Ditto.
> 
> >"\D" works though.)
> >
> >Any objections out there?
> 
> My only complaint here is being forced to use the 'shift' key on commands 
> that will be common.

On most European keyboards you already have to use "AltGr" to get to \
so using an extra shift is not too bad ;)


-- 
Hannu Krosing <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Let's create a release team

2002-12-10 Thread Vince Vielhaber
On Tue, 10 Dec 2002, Tom Lane wrote:

> "Dan Langille" <[EMAIL PROTECTED]> writes:
> >> --- for example: Marc owns, runs, and pays for the
> >> postgresql.org servers.
>
> > Is the cvs repo mirrored?
>
> Anyone running cvsup would have a complete copy of the source CVS,
> I believe.  It would be more troubling to reconstruct the mailing list
> archives; I'm not sure that those are mirrored anywhere.  (Marc?)

Archives are mirrored at a number of sites.  There was a time when all
web mirrors also mirrored them but that was split off about a year ago.

Vince.
-- 
 Fast, inexpensive internet service 56k and beyond!  http://www.pop4.net/
   http://www.meanstreamradio.com   http://www.unknown-artists.com
 Internet radio: It's not file sharing, it's just radio.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Let's create a release team

2002-12-10 Thread Lee Kindness
Dan Langille writes:
 > On 10 Dec 2002 at 9:34, Tom Lane wrote:
 > > Anyone running cvsup would have a complete copy of the source CVS, I
 > > believe.  It would be more troubling to reconstruct the mailing list
 > > archives; I'm not sure that those are mirrored anywhere
 > Do you mean the repository, or the source?  The repository is the ,v 
 > files.  The source isn't.  Most developers would have the source, 
 > but not necessarily the repo.

See:

 http://www.cvsup.org/

It mirrors the repository and some of the PostgreSQL developers use
this...

Lee.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Shridhar Daithankar
On 10 Dec 2002 at 9:42, Rod Taylor wrote:

> Perhaps a more appropriate rule would be 1 AVD per tablespace?  Since
> PostgreSQL only has a single tablespace at the moment

Sorry, I am talking without having done much of it (stuck on Windows for my 
job). But when I was talking with Matthew offlist, he mentioned that, if 
properly streamlined, pgavd_c could be in the pg sources. I also have plans of 
making pgavd a central point of management, i.e. a place from which you can 
vacuum all your machines and all the databases on them, like a network 
management console.

I hope to finish things fast but can't commit to it. Still tied up here..

Bye
 Shridhar

--
QOTD:   "It's a cold bowl of chili, when love don't work out."


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Rod Taylor
> > Not sure what you mean by that, but it sounds like the behaviour of my AVD 
> > (having it block until the vacuum command completes) is fine, and perhaps 
> > preferrable. 
> 
> I can easily imagine larger systems with multiple CPUs and multiple disk
> and card bundles to support multiple databases.  In this case, I have a
> hard time figuring out why you'd not want to allow multiple concurrent
> vacuums.  I guess I can understand a recommendation of only allowing a
> single vacuum, however, should it be mandated that AVD will ONLY be able
> to perform a single vacuum at a time?

Hmm.. CPU time (from what I've seen) isn't an issue.  Strictly disk. The
big problem with multiple vacuums is determining which tables are in
common areas.

Perhaps a more appropriate rule would be 1 AVD per tablespace?  Since
PostgreSQL only has a single tablespace at the moment

-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc





Re: [HACKERS] Let's create a release team

2002-12-10 Thread Dan Langille
On 10 Dec 2002 at 9:34, Tom Lane wrote:

> "Dan Langille" <[EMAIL PROTECTED]> writes:
> >> --- for example: Marc owns, runs, and pays for the
> >> postgresql.org servers.
> 
> > Is the cvs repo mirrored?
> 
> Anyone running cvsup would have a complete copy of the source CVS, I
> believe.  It would be more troubling to reconstruct the mailing list
> archives; I'm not sure that those are mirrored anywhere

Do you mean the repository, or the source?  The repository is the ,v 
files.  The source isn't.  Most developers would have the source, 
but not necessarily the repo.
-- 
Dan Langille : http://www.langille.org/


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Let's create a release team

2002-12-10 Thread Tom Lane
"Dan Langille" <[EMAIL PROTECTED]> writes:
>> --- for example: Marc owns, runs, and pays for the
>> postgresql.org servers.

> Is the cvs repo mirrored?

Anyone running cvsup would have a complete copy of the source CVS,
I believe.  It would be more troubling to reconstruct the mailing list
archives; I'm not sure that those are mirrored anywhere.  (Marc?)

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Greg Copeland
On Fri, 2002-11-29 at 07:19, Shridhar Daithankar wrote:
> On 29 Nov 2002 at 7:59, Matthew T. O'Connor wrote:
> 
> > On Thursday 28 November 2002 23:26, Shridhar Daithankar wrote:
> > > On 28 Nov 2002 at 10:45, Tom Lane wrote:
> > > > This is almost certainly a bad idea.  vacuum is not very
> > > > processor-intensive, but it is disk-intensive.  Multiple vacuums running
> > > > at once will suck more disk bandwidth than is appropriate for a
> > > > "background" operation, no matter how sexy your CPU is.  I can't see
> > > > any reason to allow more than one auto-scheduled vacuum at a time.
> > > Hmm.. We would need to take care of that as well..
> > Not sure what you mean by that, but it sounds like the behaviour of my AVD 
> > (having it block until the vacuum command completes) is fine, and perhaps 
> > preferrable. 
> 
> Right.. But I will still keep option open for parallel vacuum which is most 
> useful for reusing tuples in shared buffers.. And stale updated tuples are what 
> causes performance drop in my experience..
> 
> You know.. just enough rope to hang themselves..;-)
> 

Right.  This is exactly what I was thinking about.  If someone shoots
their own foot off, that's their problem.  The added flexibility seems
well worth it.

Greg



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Auto Vacuum Daemon (again...)

2002-12-10 Thread Greg Copeland
On Fri, 2002-11-29 at 06:59, Matthew T. O'Connor wrote:
> On Thursday 28 November 2002 23:26, Shridhar Daithankar wrote:
> > On 28 Nov 2002 at 10:45, Tom Lane wrote:
> > > "Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> > > > interesting thought.  I think this boils down to how many knobs do we
> > > > need to put on this system. It might make sense to say allow upto X
> > > > concurrent vacuums, a 4 processor system might handle 4 concurrent
> > > > vacuums very well.
> > >
> > > This is almost certainly a bad idea.  vacuum is not very
> > > processor-intensive, but it is disk-intensive.  Multiple vacuums running
> > > at once will suck more disk bandwidth than is appropriate for a
> > > "background" operation, no matter how sexy your CPU is.  I can't see
> > > any reason to allow more than one auto-scheduled vacuum at a time.
> >
> > Hmm.. We would need to take care of that as well..
> 
> Not sure what you mean by that, but it sounds like the behaviour of my AVD 
> (having it block until the vacuum command completes) is fine, and perhaps 
> preferrable. 


I can easily imagine larger systems with multiple CPUs and multiple disk
and card bundles to support multiple databases.  In this case, I have a
hard time figuring out why you'd not want to allow multiple concurrent
vacuums.  I guess I can understand a recommendation of only allowing a
single vacuum, however, should it be mandated that AVD will ONLY be able
to perform a single vacuum at a time?


Greg



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Let's create a release team

2002-12-10 Thread Dan Langille
On 10 Dec 2002 at 0:56, Tom Lane wrote:

> "Dan Langille" <[EMAIL PROTECTED]> writes:
> > Is the process documented?  Any set procedure?  Who knows how to do
> > it?
> 
> Er ... nope, nope, the core bunch ...

Sounds like we need to do a brain dump then.  I just happen to have 
some equipment left over from "The Matrix"

> > If these things are not documented, they should be.
> 
> Most of the undocumented details of the release process are in the
> heads of Marc Fournier and Bruce Momjian.  If either of them falls off
> the end of the earth, we have worse troubles than whether we remember
> how to do a release

On a project, anyone is replaceable.  And anyone might leave for any 
number of reasons.  If they do, the effect upon the project will be 
minimized by having the major processes documented.

> --- for example: Marc owns, runs, and pays for the
> postgresql.org servers.

Is the cvs repo mirrored?

> (Me, I just hack code, so I'm replaceable.)

Yeah, yeah, stop being humble... ;)

> But if you want to try to document the process better, there are some
> details written down already (eg, src/tools/RELEASE_CHANGES) and I'm
> sure Marc and Bruce would cooperate in writing down more.

That's a good start. It looks like a list of things easily forgotten 
but if forgotten, make us look bad.
-- 
Dan Langille : http://www.langille.org/


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])