Re: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Tom Lane

"Rod Taylor" <[EMAIL PROTECTED]> writes:
> Anyway, the point is that some of the simple views should be
> straightforward to reverse automatically if someone has the will and
> the time.  A while back a list of 'views which cannot be reversed'
> was created; it included things such as unions, intersections,
> exclusions, aggregates, CASE statements, and a few more items.

SQL92 has a notion that certain simple views are "updatable", while the
rest are not.  In our terms this means that we should automatically
create ON INSERT/UPDATE/DELETE rules if the view is updatable according
to the spec.  I have not bothered to chase down all the exact details
of the spec's "updatableness" restrictions, but they're along the same
lines you mention --- only one referenced table, no aggregation, no
set operations, all view outputs are simple column references, etc.

My feeling is that the restrictions are stringent enough to eliminate
most of the interesting uses of views, and hence an automatic rule
creation feature is not nearly as useful/important as it appears at
first glance.  In real-world applications you'll have to expend some
thought on manual rule creation anyway.
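For concreteness, here is a rough, untested sketch (table, view, and rule names are invented for illustration) of the kind of rule set such an automatic facility would have to generate for a spec-updatable single-table view:

```sql
-- A view that is "updatable" in the SQL92 sense: one table, no
-- aggregation, simple column references only.
CREATE TABLE emp (id int PRIMARY KEY, name text, active bool);
CREATE VIEW active_emp AS SELECT id, name FROM emp WHERE active;

-- The rules an automatic facility would have to create:
CREATE RULE active_emp_ins AS ON INSERT TO active_emp DO INSTEAD
    INSERT INTO emp (id, name, active) VALUES (new.id, new.name, true);
CREATE RULE active_emp_upd AS ON UPDATE TO active_emp DO INSTEAD
    UPDATE emp SET name = new.name WHERE id = old.id;
CREATE RULE active_emp_del AS ON DELETE TO active_emp DO INSTEAD
    DELETE FROM emp WHERE id = old.id;
```

Anything past this pattern (joins, set operations, computed outputs) is where the spec's restrictions bite, which is exactly Tom's point about hand-written rules.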

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Rod Taylor

> > > This would be a potential feature of being able to insert into views
> > > in general: reversing the CREATE VIEW statement to accept inserts,
> > > deletes and updates.
> > Definitely not a 'potential' feature, but an existing and documented one.
> > Read up on rules, especially the 'ON INSERT DO INSTEAD' stuff.  It's not
> > automatic, though.
>
> Trust me, I know how to go about doing those kinds of things manually.
> I was referring to the automated reversal -- which provides this
> feature in a very simple manner without any additions or changes to
> system tables -- aside from reverse rules themselves, which are a more
> generic feature.

Hmm. My above statement lost all credibility in poor grammer and
speeling.  Time for bed I suppose.

Anyway, the point is that some of the simple views should be
straightforward to reverse automatically if someone has the will and
the time.  A while back a list of 'views which cannot be reversed' was
created; it included things such as unions, intersections, exclusions,
aggregates, CASE statements, and a few more items.





Re: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Rod Taylor

> > This would be a potential feature of being able to insert into views
> > in general: reversing the CREATE VIEW statement to accept inserts,
> > deletes and updates.
> Definitely not a 'potential' feature, but an existing and documented one.
> Read up on rules, especially the 'ON INSERT DO INSTEAD' stuff.  It's not
> automatic, though.

Trust me, I know how to go about doing those kinds of things manually.
I was referring to the automated reversal -- which provides this
feature in a very simple manner without any additions or changes to
system tables -- aside from reverse rules themselves, which are a more
generic feature.

> > If true, focus on that.  There's lots of views that cannot be reversed
> > properly -- unions come to mind -- but perhaps this type of simple
> > join could be a first step in the package.  I believe this is on the
> > TODO list already.
> On the TODO list are updatable views in the SQL sense of the word [i.e.
> automatic updatability of a view which matches certain criteria].
>
> > Different attack, but accomplishes the same thing within SQL standards,
> > as I seem to recall views are supposed to do this where reasonable.
> >
> >
> > Failing that, implement this type of action the same way as foreign
> > keys: via the described method with automagically created views,
> > tables, etc.  Though I suggest leaving it in contrib for some time.
> > Enum functionality isn't particularly useful to the majority, whose
> > applications tend to pull out the numbers for states when the
> > application is opened (with the assumption they're generally static).
>
> The original suggestion was not for an enum type, it was for a
> _dynamically extensible_ data dictionary type.

ENUMs, from my memory, are easily redefined.  And since the database
they're implemented in requires table locks for everything, they can
appear dynamic (nothing is transaction safe in that thing anyway).





Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Tom Lane

Matthew Hagerty <[EMAIL PROTECTED]> writes:
> So then how would I code for the exception, i.e. the backend goes down just 
> before or during my call to PQsendQuery()?  If I am non-blocking then I can 
> determine that my query did not go through (PQsendQuery() or PQflush() 
> returns an error) and attempt to recover.

This is the nasty part of any async client, all right.  The case of a
backend crash doesn't bother me particularly: in the first place, you'll
get back a "connection closed" failure quickly, and in the second place,
backend crashes while absorbing query text (as opposed to while
executing a query) are just about unheard of.  However, the possibility
of loss of network connectivity is much more dire: it's plausible, and
in most cases you're looking at a very long timeout before the kernel
will decide that the connection is toast and report an error to you.

I'm unconvinced, however, that using PQsetnonblocking improves the
picture very much.  Unless the database operations are completely
noncritical to what your app is doing, you're going to be pretty
much dead in the water anyway with a lost connection :-(

In the end you pays your money and you takes your choice.  I do
recommend reading my past rants about why PQsetnonblocking is broken
(circa Jan 2000, IIRC) before you put any faith in it.  If you end
up deciding that it really is something you gotta have, maybe you'll
be the one to do the legwork to make it reliable.

regards, tom lane




Re: [HACKERS] selecting from cursor

2001-07-07 Thread Alex Pilosov

On Mon, 2 Jul 2001, Alex Pilosov wrote:

> Erm, forgot to attach the patch. Here it is.
(yow) Don't even bother looking at this patch.  The mail server delayed
this message by almost a week, and by now the code has totally changed.

I took Tom's suggestion and made RTE a union; the new definition of the
RTE appears below.

I have most of the portal-related code working; only the executor needs
some more fixes.  The code properly makes the PortalScan Path entry,
PortalScan Plan nodes, etc.  I have added PortalReScan to tell a portal
it needs to rescan itself.

I'll post a correct patch next week.  Thank you to everyone, and especially
Tom, for bearing with my often stupid questions.

--cut here--rte definition--
typedef enum RTEType
{
    RTE_RELATION,
    RTE_SUBSELECT,
    RTE_PORTAL
} RTEType;

typedef struct RangeTblEntry
{
    NodeTag     type;
    RTEType     rtetype;

    /*
     * Fields valid in all RTEs:
     */
    Attr       *alias;          /* user-written alias clause, if any */
    Attr       *eref;           /* expanded reference names */
    bool        inh;            /* inheritance requested? */
    bool        inFromCl;       /* present in FROM clause */
    bool        checkForRead;   /* check rel for read access */
    bool        checkForWrite;  /* check rel for write access */
    Oid         checkAsUser;    /* if not zero, check access as this user */

    union
    {
        struct
        {
            /* Fields for a plain relation RTE (rtetype = RTE_RELATION) */
            char       *relname;    /* real name of the relation */
            Oid         relid;      /* OID of the relation */
        } rel;
        struct
        {
            /* Fields for a subquery RTE (rtetype = RTE_SUBSELECT) */
            Query      *subquery;   /* the sub-query */
        } sub;
        struct
        {
            /* Fields for a portal RTE (rtetype = RTE_PORTAL) */
            char       *portalname; /* the portal's name */
        } portal;
    } u;
} RangeTblEntry;






Re: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Alex Pilosov


On Sat, 7 Jul 2001, Rod Taylor wrote:

> This would be a potential feature of being able to insert into views
> in general.  Reversing the CREATE VIEW statement to accept inserts,
> deletes and updates.
Definitely not a 'potential' feature, but an existing and documented one.
Read up on rules, especially the 'ON INSERT DO INSTEAD' stuff.  It's not
automatic, though.

> If true, focus on that.  There's lots of views that cannot be reversed
> properly -- unions come to mind -- but perhaps this type of simple
> join could be a first step in the package.  I believe this is on the
> TODO list already.
On the TODO list are updatable views in the SQL sense of the word [i.e.
automatic updatability of a view which matches certain criteria].

> Different attack, but accomplishes the same thing within SQL standards,
> as I seem to recall views are supposed to do this where reasonable.
> 
> 
> Failing that, implement this type of action the same way as foreign
> keys: via the described method with automagically created views,
> tables, etc.  Though I suggest leaving it in contrib for some time.
> Enum functionality isn't particularly useful to the majority, whose
> applications tend to pull out the numbers for states when the
> application is opened (with the assumption they're generally static).

The original suggestion was not for an enum type, it was for a
_dynamically extensible_ data dictionary type.

ENUM is statically defined, and it wouldn't be too hard to implement, with
one exception: one more type-specific field needs to be added to the
pg_attribute table, to store the argument for the type (such as the
length for char/varchar types, the length/precision for the numeric type,
and the possible values for an enum type).

This just needs a pronouncement that the addition is a good idea, and
then it's a trivial thing to implement ENUM.

-alex





Re: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Rod Taylor

This would be a potential feature of being able to insert into views
in general: reversing the CREATE VIEW statement to accept inserts,
deletes and updates.

If true, focus on that.  There's lots of views that cannot be reversed
properly -- unions come to mind -- but perhaps this type of simple
join could be a first step in the package.  I believe this is on the
TODO list already.

Different attack, but accomplishes the same thing within SQL standards,
as I seem to recall views are supposed to do this where reasonable.


Failing that, implement this type of action the same way as foreign
keys: via the described method with automagically created views,
tables, etc.  Though I suggest leaving it in contrib for some time.
Enum functionality isn't particularly useful to the majority, whose
applications tend to pull out the numbers for states when the
application is opened (with the assumption they're generally static).

--
Rod Taylor

Your eyes are weary from staring at the CRT. You feel sleepy. Notice
how restful it is to watch the cursor blink. Close your eyes. The
opinions stated above are yours. You cannot imagine why you ever felt
otherwise.

- Original Message -
From: "Alex Pilosov" <[EMAIL PROTECTED]>
To: "David Bennett" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Saturday, July 07, 2001 9:24 PM
Subject: RE: [HACKERS] New SQL Datatype RECURRINGCHAR


> On Sat, 7 Jul 2001, David Bennett wrote:
>
> > -
> > In a nutshell you are recommending:
> > -
> >
> >   create table contact_type (
> >     code    int2,
> >     type    char(16),
> >     PRIMARY KEY ( code )
> >   );
> >
> >   create table contact (
> >     number  serial,
> >     name    char(32),
> >     type    int2,
> >     PRIMARY KEY ( number ),
> >     FOREIGN KEY ( type ) REFERENCES contact_type ( code )
> >   );
> >
> >   create view contact_with_readable_type as (
> >     select c.number as number,
> >            c.name as name,
> >            t.type as type
> >     from contact c,
> >          contact_type t
> >     where c.type = t.code
> >   );
> >
> > * To build a type lookup table:
> >
> >   1) Select type and code from contact_type
> >   2) Build UI object which displays type and returns code
> Just 'select distinct' on a view should do just fine.
>
> > * In order to insert a new record with this model:
> >
> >   1) Look up to see if type exists
> >   2) Insert new type
> >   3) Get type ID
> >   4) Insert contact record
> This can be encapsulated with "ON INSERT" rule on a view.
>
> > * The adhoc query user is now faced with
> >   the task of understanding 3 data tables.
> No, only one view. All the logic is encapsulated there.
>
> >
> > -
> > With recurringchar you could do this easily as:
> > -
> >
> >   create table contact (
> >     number  serial,
> >     name    char(32),
> >     type    recurringchar1,
> >     PRIMARY KEY ( number )
> >   );
> >
> > * To build a type lookup table:
> >
> >   1) Select distinct type from contact (optimized access to the
> >      recurringchar dictionary)
> >   2) Build UI object which displays and returns type.
> >
> > * In order to insert a new record with this model:
> >
> >   1) Insert contact record
> >
> > * The adhoc query user has one data table.
> >
> > -
> >
> > Granted, changing the value of contact_type.type would require edits
> > to the contact records. It may be possible to add simple syntax to
> > allow editing of a 'recurringchar dictionary' to get around an isolated
> > problem which would only exist in certain applications.
> >
> > Actually, maybe 'dictionary' or 'dictref' would be a better name for
> > the datatype.
> These things belong in the application or middleware (i.e.
> views/triggers), not in the database server itself.
>
> There are multiple problems with your implementation, for example
> transaction handling.  Assume this situation:
>
> Tran A inserts a new contact with new type "foo", but does not commit.
> The dictionary assigns the value N to 'foo'.
>
> Tran B inserts a new contact with type foo.  What value should be
> entered in the dictionary?  N?  A new value?
>
> If a type disappears from the database, does its dictionary ID get
> reused?
>
> None of these are simple questions, and it's not up to the database
> to decide them.  Your preferred solution belongs in your
> triggers/views, not in the core database.





RE: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Alex Pilosov

On Sat, 7 Jul 2001, David Bennett wrote:

> -
> In a nutshell you are recommending:
> -
> 
>   create table contact_type (
>     code    int2,
>     type    char(16),
>     PRIMARY KEY ( code )
>   );
> 
>   create table contact (
>     number  serial,
>     name    char(32),
>     type    int2,
>     PRIMARY KEY ( number ),
>     FOREIGN KEY ( type ) REFERENCES contact_type ( code )
>   );
> 
>   create view contact_with_readable_type as (
>     select c.number as number,
>            c.name as name,
>            t.type as type
>     from contact c,
>          contact_type t
>     where c.type = t.code
>   );
> 
> * To build a type lookup table:
> 
>   1) Select type and code from contact_type
>   2) Build UI object which displays type and returns code
Just 'select distinct' on a view should do just fine. 

> * In order to insert a new record with this model:
> 
>   1) Look up to see if type exists
>   2) Insert new type
>   3) Get type ID
>   4) Insert contact record
This can be encapsulated with "ON INSERT" rule on a view.
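For illustration, an untested sketch of such a rule against the schema above (the view is assumed to be named contact_with_readable_type and to expose name and type columns; the max()+1 code assignment is illustrative only and not concurrency-safe, for exactly the transaction reasons discussed below -- a sequence would be more robust):

```sql
CREATE RULE contact_view_ins AS
ON INSERT TO contact_with_readable_type DO INSTEAD (
    -- add the type to the lookup table if it is not already there
    INSERT INTO contact_type (code, type)
    SELECT (SELECT coalesce(max(code), 0) + 1 FROM contact_type), new.type
    WHERE NOT EXISTS (SELECT 1 FROM contact_type WHERE type = new.type);
    -- then insert the contact row, resolving the type name to its code
    INSERT INTO contact (name, type)
    VALUES (new.name,
            (SELECT code FROM contact_type WHERE type = new.type));
);
```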

> * The adhoc query user is now faced with
>   the task of understanding 3 data tables.
No, only one view. All the logic is encapsulated there.

> 
> -
> With recurringchar you could do this easily as:
> -
> 
>   create table contact (
>     number  serial,
>     name    char(32),
>     type    recurringchar1,
>     PRIMARY KEY ( number )
>   );
> 
> * To build a type lookup table:
> 
>   1) Select distinct type from contact (optimized access to the
>      recurringchar dictionary)
>   2) Build UI object which displays and returns type.
> 
> * In order to insert a new record with this model:
> 
>   1) Insert contact record
> 
> * The adhoc query user has one data table.
> 
> -
> 
> Granted, changing the value of contact_type.type would require edits
> to the contact records. It may be possible to add simple syntax to
> allow editing of a 'recurringchar dictionary' to get around an isolated
> problem which would only exist in certain applications.
> 
> Actually, maybe 'dictionary' or 'dictref' would be a better name for
> the datatype.
These things belong in the application or middleware (i.e. views/triggers),
not in the database server itself.

There are multiple problems with your implementation, for example
transaction handling.  Assume this situation:

Tran A inserts a new contact with new type "foo", but does not commit.
The dictionary assigns the value N to 'foo'.

Tran B inserts a new contact with type foo.  What value should be entered
in the dictionary?  N?  A new value?

If a type disappears from the database, does its dictionary ID get reused?

None of these are simple questions, and it's not up to the database
to decide them.  Your preferred solution belongs in your triggers/views,
not in the core database.






Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Matthew Hagerty

At 03:46 PM 7/7/2001 -0400, Tom Lane wrote:
>Matthew Hagerty <[EMAIL PROTECTED]> writes:
> > If I don't call PQsetnonblocking() will that affect any of the async
> > functions I'm dealing with?
>
>PQsetnonblocking has nothing to do with the
>PQconsumeInput/PQisBusy/PQgetResult family of functions.  The point of
>the latter is to avoid blocking while waiting for input from the
>database.  The point of PQsetnonblocking is to avoid blocking while
>sending stuff to the database.
>
>Now in a TCP environment, the only way send() is going to block is if
>you send more stuff than there's currently room for in your kernel's
>networking buffers --- which typically are going to be hundreds of K.
>I could see needing PQsetnonblocking if you need to avoid blocking
>while transmitting COPY IN data to the database ... but for queries
>it's harder to credit.  Also, unless you are sending more than one
>query in a query string, the backend is going to be absorbing the
>data as fast as it can anyway; so even if you do block it's only
>going to be for a network transit delay, not for database processing.
>
>Personally I've done quite a bit of asynchronous-application coding with
>PQconsumeInput & friends, but never felt the need for PQsetnonblocking.
>This is why I've not been motivated to try to fix its problems...

So then how would I code for the exception, i.e. the backend goes down just 
before or during my call to PQsendQuery()?  If I am non-blocking then I can 
determine that my query did not go through (PQsendQuery() or PQflush() 
returns an error) and attempt to recover.  Otherwise, my server could block 
until PQsendQuery() times out and returns an error.  I guess it would depend 
on how long PQsendQuery() waits to return if there is an error or problem 
with the backend or the connection to it?

Matthew





Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Tom Lane

I said:
> Also, unless you are sending more than one
> query in a query string, the backend is going to be absorbing the
> data as fast as it can anyway; so even if you do block it's only
> going to be for a network transit delay, not for database processing.

Actually, forget the "unless" part -- the backend won't start parsing
the querystring until it's got it all.  It just reads the query into
memory as fast as it can, semicolons or no.

regards, tom lane




Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Tom Lane

Matthew Hagerty <[EMAIL PROTECTED]> writes:
> If I don't call PQsetnonblocking() will that affect any of the async 
> functions I'm dealing with?

PQsetnonblocking has nothing to do with the
PQconsumeInput/PQisBusy/PQgetResult family of functions.  The point of
the latter is to avoid blocking while waiting for input from the
database.  The point of PQsetnonblocking is to avoid blocking while
sending stuff to the database.

Now in a TCP environment, the only way send() is going to block is if
you send more stuff than there's currently room for in your kernel's
networking buffers --- which typically are going to be hundreds of K.
I could see needing PQsetnonblocking if you need to avoid blocking
while transmitting COPY IN data to the database ... but for queries
it's harder to credit.  Also, unless you are sending more than one
query in a query string, the backend is going to be absorbing the
data as fast as it can anyway; so even if you do block it's only
going to be for a network transit delay, not for database processing.

Personally I've done quite a bit of asynchronous-application coding with
PQconsumeInput & friends, but never felt the need for PQsetnonblocking.
This is why I've not been motivated to try to fix its problems...

regards, tom lane




Re: [HACKERS] Re: [GENERAL] Vacuum and Transactions

2001-07-07 Thread Tom Lane

Lincoln Yeoh <[EMAIL PROTECTED]> writes:
> Would 7.2 maintain performance when updating a row repeatedly (update,
> commit)?

You'll still need to VACUUM to get rid of the obsoleted versions of the
row.  The point of the planned 7.2 changes is to make VACUUM cheap and
nonintrusive enough so that you can run it frequently on tables that are
seeing continual updates.

regards, tom lane




Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Matthew Hagerty

At 02:13 PM 7/7/2001 -0400, Tom Lane wrote:
>Matthew Hagerty <[EMAIL PROTECTED]> writes:
> > Only applications that have used PQsetnonblocking have a need for this."
>
> > Since I use PQsetnonblocking(), I included PQflush().
>
>Hmm.  My opinions about the PQsetnonblocking patch are on record:
>it's broken and needs fundamental redesign before it has any chance
>of operating reliably.  Unless you are sending queries whose text is
>many kB (more than your kernel will buffer for one send() call),
>I recommend you not use it.
>
>However, that only affects output, not input.

If I don't call PQsetnonblocking() will that affect any of the async 
functions I'm dealing with?  I might have insert queries that are rather 
large and I'm not sure how big my kernel's buffers are (and surely it will 
be different on other OSes.)



> > I wrote a small test program and at this point (after
> > reading the first result) I looped back to my select, but the socket never
> > went read-ready again, so the last
> > PQconsumeInput()/PQisBusy()/PQgetResults() was never called to receive the
> > NULL response from PQgetResult(), which is how the docs say I know the
> > query is done.
>
> > But if I loop back to PQconsumeInput()/PQisBusy(), then I am effectively
> > blocking since I have no way to know that PQconsumeInput() won't block or
> > that the PQisBusy() will ever return zero again.
>
>(1) No, you don't need to repeat the PQconsumeInput, unless select still
>says read-ready.  (You could call it again, but there's no point.)
>
>(2) You should repeat PQisBusy && PQgetResult until one of them fails,
>however.  What you're missing here is that a single TCP packet might
>provide zero, one, or more than one PQgetResult result.  You want to
>loop until you've gotten all the results you can get from the current
>input packet.  Then you go back to select(), and eventually you'll see
>more backend input and you do another consumeInput and another isBusy/
>getResult loop.

Yup, I think that is what I was misunderstanding.  I'll modify my loop and 
see how it goes.


>(3) PQconsumeInput never blocks.  Period.  PQgetResult can block, but
>it promises not to if an immediately prior PQisBusy returned 0.
>
> regards, tom lane

Thanks,
Matthew





Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Tom Lane

Matthew Hagerty <[EMAIL PROTECTED]> writes:
> Only applications that have used PQsetnonblocking have a need for this."

> Since I use PQsetnonblocking(), I included PQflush().

Hmm.  My opinions about the PQsetnonblocking patch are on record:
it's broken and needs fundamental redesign before it has any chance
of operating reliably.  Unless you are sending queries whose text is
many kB (more than your kernel will buffer for one send() call),
I recommend you not use it.

However, that only affects output, not input.

> I wrote a small test program and at this point (after 
> reading the first result) I looped back to my select, but the socket never 
> went read-ready again, so the last 
> PQconsumeInput()/PQisBusy()/PQgetResults() was never called to receive the 
> NULL response from PQgetResult(), which is how the docs say I know the 
> query is done.

> But if I loop back to PQconsumeInput()/PQisBusy(), then I am effectively 
> blocking since I have no way to know that PQconsumeInput() won't block or 
> that the PQisBusy() will ever return zero again.

(1) No, you don't need to repeat the PQconsumeInput, unless select still
says read-ready.  (You could call it again, but there's no point.)

(2) You should repeat PQisBusy && PQgetResult until one of them fails,
however.  What you're missing here is that a single TCP packet might
provide zero, one, or more than one PQgetResult result.  You want to
loop until you've gotten all the results you can get from the current
input packet.  Then you go back to select(), and eventually you'll see
more backend input and you do another consumeInput and another isBusy/
getResult loop.

(3) PQconsumeInput never blocks.  Period.  PQgetResult can block, but
it promises not to if an immediately prior PQisBusy returned 0.

regards, tom lane




Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Matthew Hagerty

Thanks for the response Tom!  Does anyone ever tell you how much you are 
appreciated?  If not, I will.  When I post to pgHackers I know I will get a 
response (usually) from many knowledgeable people, and for that I thank 
everyone.  But I *always* receive a response from Tom, and for that I am 
truly and greatly thankful!

Now, on with asynchronous query processing!  WooHoo! :)

At 12:35 PM 7/7/2001 -0400, Tom Lane wrote:
>Matthew Hagerty <[EMAIL PROTECTED]> writes:
> > I'm working with libpq in asynchronous mode and I have a question about
> > PQgetResult.  I have this situation:
>
> > submit a query via PQsendQuery()
> > flush to the backend with PQflush()
>
>I think the flush is not necessary; PQsendQuery should do it.


The docs warn that PQflush() must be called; here is what they say:

"PQflush needs to be called on a non-blocking connection before calling 
select to determine if a response has arrived. If 0 is returned it ensures 
that there is no data queued to the backend that has not actually been 
sent. Only applications that have used PQsetnonblocking have a need for this."

Since I use PQsetnonblocking(), I included PQflush().



> > set my read descriptor on the socket and enter a select()
>
> > select returns read_ready on the socket, so call PGconsumeInput()
> > PQisBusy() returns zero, so call PQgetResult()
> > PQgetResult() returns a pointer so do whatever with the result
> > call PQclear() on the result
>
>So far so good.
>
> > Now what do I do?
>
>Loop back to the PQisBusy() step.  In practice, you are probably doing
>this in an event-driven application, and the select() is the place in
>the outer loop that blocks for an event.  PQconsumeInput followed by
>a PQisBusy/PQgetResult/process result loop are just your response
>subroutine for an input-ready-on-the-database-connection event.

This is my primary concern.  I'm actually writing a server and I have other 
events going on.  I wrote a small test program and at this point (after 
reading the first result) I looped back to my select, but the socket never 
went read-ready again, so the last 
PQconsumeInput()/PQisBusy()/PQgetResults() was never called to receive the 
NULL response from PQgetResult(), which is how the docs say I know the 
query is done.

But if I loop back to PQconsumeInput()/PQisBusy(), then I am effectively 
blocking since I have no way to know that PQconsumeInput() won't block or 
that the PQisBusy() will ever return zero again.

If there is more input to read, i.e. the NULL from PQgetResult(), then how 
come the socket never goes read-ready again, as in my test program 
above?  Or can I call PQconsumeInput() repeatedly and be guaranteed that it 
will not block and that I will be able to process the remaining results?

> > The docs say that in async mode that PQgetResult() must
> > be called until it returns NULL.  But, how do I know that calling
> > PQgetResult() a second, third, fourth, etc. time will not block?
>
>When PQisBusy says you can.
>
> > When
> > PQisBusy() returns zero, does that mean that PQgetResult() is guaranteed
> > not to block for all results, i.e. until it returns NULL and a new 
> query is
> > issued?
>
>No, it means *one* result is available (or that a NULL is available, ie,
>libpq has detected the end of the query cycle).  Its answer will
>probably change after you read the result.

My main problem is that sockets are slow devices and I have some other disk 
I/O I have to be doing as well.  I know that another call to PQgetResult() 
will probably return NULL, but if that data has not come from the backend 
yet then PQgetResult() will block, and I can't sit in a loop calling 
PQconsumeInput()/PQisBusy() because that is just like blocking and I may as 
well just let PQgetResult() block.

It seems that when the query cycle has ended and PQgetResult() would return 
NULL, the socket should be set read-ready again, but that never 
happens...  Well, at least I did not see it happen.  I can send my test 
code if need be.

Thanks,
Matthew

> regards, tom lane





Re: [HACKERS] Async PQgetResult() question.

2001-07-07 Thread Tom Lane

Matthew Hagerty <[EMAIL PROTECTED]> writes:
> I'm working with libpq in asynchronous mode and I have a question about 
> PQgetResult.  I have this situation:

> submit a query via PQsendQuery()
> flush to the backend with PQflush()

I think the flush is not necessary; PQsendQuery should do it.

> set my read descriptor on the socket and enter a select()

> select returns read_ready on the socket, so call PGconsumeInput()
> PQisBusy() returns zero, so call PQgetResult()
> PQgetResult() returns a pointer so do whatever with the result
> call PQclear() on the result

So far so good.

> Now what do I do?

Loop back to the PQisBusy() step.  In practice, you are probably doing
this in an event-driven application, and the select() is the place in
the outer loop that blocks for an event.  PQconsumeInput followed by
a PQisBusy/PQgetResult/process result loop are just your response
subroutine for an input-ready-on-the-database-connection event.

> The docs say that in async mode that PQgetResult() must 
> be called until it returns NULL.  But, how do I know that calling 
> PQgetResult() a second, third, fourth, etc. time will not block?

When PQisBusy says you can.

> When 
> PQisBusy() returns zero, does that mean that PQgetResult() is guaranteed 
> not to block for all results, i.e. until it returns NULL and a new query is 
> issued?

No, it means *one* result is available (or that a NULL is available, ie,
libpq has detected the end of the query cycle).  Its answer will
probably change after you read the result.

regards, tom lane




Re: [HACKERS] 2 gig file size limit

2001-07-07 Thread Lamar Owen

On Friday 06 July 2001 18:51, Naomi Walker wrote:
> If PostgreSQL is run on a system that has a file size limit (2 gig?), what
> might cause us to hit the limit?

Since PostgreSQL automatically segments its internal data files to get around 
such limits, the only place you will hit this limit will be when making 
backups using pg_dump or pg_dumpall.  You may need to pipe the output of 
those commands into a file splitting utility, and then you'll have to pipe 
through a reassembly utility to restore.
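For example, something along these lines (the database name and chunk size are illustrative; `split` and `cat` are the usual splitting/reassembly tools):

```sh
# dump in roughly 1 GB chunks to stay under a 2 GB file size limit
pg_dump mydb | split -b 1000m - mydb.dump.

# reassemble the chunks and restore
cat mydb.dump.* | psql mydb
```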
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [HACKERS] Pg on SMP half-powered

2001-07-07 Thread Lamar Owen

On Thursday 05 July 2001 10:51, Víctor Romero wrote:
>  I am running postgresql 7.1 on an SMP Linux box.  It runs, but it never
> passes a loadavg of 0.4, no matter how I try to overload the system.

>  The same configuration, the same executable, the same test on a non-SMP
> machine gives a loadavg of 19.

Sounds like a kernel issue.

However, the load average numbers alone are not enough information for a 
benchmark.  You need a benchmark driver that can generate enough traffic to 
load both machines and produce good timing results for the run of the 
standard benchmark queries.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




[HACKERS] Re: [GENERAL] Vacuum and Transactions

2001-07-07 Thread Lincoln Yeoh

At 05:59 PM 7/6/01 -0400, Bruce Momjian wrote:
>
>OK, I just talked to Tom on the phone and here is his idea for 7.2.  He
>says he already posted this, but I missed it.
>
>His idea is that in 7.2 VACUUM will only move rows within pages.  It
>will also store unused space locations into shared memory to be used by
>backends needing to add rows to tables.  Actual disk space compaction
>will be performed by new a VACUUM FULL(?) command.
>
>The default VACUUM will not lock the table but only prevent the table
>from being dropped.

Would 7.2 maintain performance when updating a row repeatedly (update,
commit)?  Right now performance goes down in a somewhat 1/x manner.  It
still performs OK, but it's nice to have things stay blazingly fast.

If not, will the new vacuum restore the performance?

Or will we have to use VACUUM FULL?

Thanks,
Link.

