Re: [HACKERS] Options for growth

2003-01-22 Thread Sean Chittenden
> > That would depend on the OS. Not many 'pc-based unix' support over
> > 4 GB of memory, some don't even go that far.
> 
> > By the way, I too wonder which supported OS platform would support
> > over 4GB of memory on a PC..
> 
> Linux? I don't think there's any problem handling more than 4G
> memory in the system. On 32bit architectures, there's of course the
> 3G (I think) per process limit, but as postgres uses multiprocess
> and not multithreading, this issue doesn't hit so soon. Of course,
> if the per process memory is the problem, you'd have to go to 64bit.

Heh, don't kid yourself.  x86 can only address 4GB of memory.  The
hack that Linux uses is to swap 2GB sections of RAM in and out of the
4GB+ range, copying the section it needs down into the usable address
space.  Can we say large page tables?  :)

You need an actual 64bit CPU to access more than 4GB of RAM without
paying for it through the nose.  -sc

-- 
Sean Chittenden





Re: [HACKERS] Windows Build System

2003-01-22 Thread am
On Wed, Jan 22, 2003 at 02:55:34PM -0400, Curtis Faith wrote:
> 
> Hannu Krosing also wrote:
> > (also I don't think you can easily compile C source on a
> > C# compiler) ;/
> 
> I don't think it makes much sense to target a compiler that won't compile
> the source, therefore, if what you say is true, we shouldn't bother with
> targeting C#.

I don't think that compiling the C source with a C# compiler is possible at all.
C# is too different from C. In fact, the C# language is much more similar
to Java than to C/C++ (there are no pointers, for example).


-- 
Adrian Maier
([EMAIL PROTECTED], 40 723.002.097)

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] C++ coding assistance request for a visualisation tool

2003-01-22 Thread Justin Clift
Justin Clift wrote:

Greg Copeland wrote:


Have you tried IBM's OSS visualization package yet?  Sorry, I don't seem
to recall the name of the tool off the top of my head (Data Explorer??)
but it uses OpenGL (IIRC) and is said to be able to visualize just about
anything.  Anything is said to include simple data over time to complex
medical CT scans.



Cool.

Just found it...  IBM "Open Visualization Data Explorer":

http://www.research.ibm.com/dx/


That seems to be a very outdated page for it.  The new pages for it (in 
case anyone else is interested) are at:

http://www.opendx.org

:-)

Regards and best wishes,

Justin Clift


--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] C++ coding assistance request for a visualisation tool

2003-01-22 Thread Justin Clift
Greg Copeland wrote:

Have you tried IBM's OSS visualization package yet?  Sorry, I don't seem
to recall the name of the tool off the top of my head (Data Explorer??)
but it uses OpenGL (IIRC) and is said to be able to visualize just about
anything.  Anything is said to include simple data over time to complex
medical CT scans.


Cool.

Just found it...  IBM "Open Visualization Data Explorer":

http://www.research.ibm.com/dx/

Going to check it out now.  The screenshot looks *very* nice.

;-)

Regards and best wishes,

Justin Clift



Greg


On Wed, 2003-01-22 at 12:19, Justin Clift wrote:


Hi guys,

Is there anyone here that's good with C++ and has a little bit of time
to add PostgreSQL support to a project?

There is a 4D visualisation program called Flounder:

http://www.enel.ucalgary.ca/~vigmond/flounder/

And it does some pretty nifty stuff.  It takes in data sets (x, y, z,
time) and displays them graphically, saving them to image files if
needed, and also creating the time sequences as animations if needed.

Was looking at it from a "performance tuning tool" point of view.  i.e.
Testing PostgreSQL performance with a bunch of settings, then stuffing
the results into a database, and then using something like Flounder for
visualising it.

It seems pretty simple, and Flounder seems like it might be the right
kind of tool for doing things like this.  Was emailing with Edward
Vigmond, the author of it, and he seems to think it'd be pretty easy to
implement too.

Now, I'm not a C++ coder, and as short of time as anyone, so I was
wondering if there is anyone here who'd be interested in helping out here.

:-)

Regards and best wishes,

Justin Clift



--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[HACKERS] Win32 Build Environment

2003-01-22 Thread mlw
I have been following the debate about the Windows build environment.

I would like to say that the build environment is not a real issue for
Windows developers. For the most part, Windows developers will be happy
with a working binary and an interface library. The ones savvy enough
to want to modify the source will be comfortable with installing
cygwin.

Does anyone see this as unreasonable?


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Call for objections: put back OIDs in CREATE TABLE AS/SELECT INTO

2003-01-22 Thread Christopher Kings-Lynne
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > Why don't you just include them by default, otherwise if WITHOUT OIDS
> > appears in the CREATE TABLE command, then don't include them ?
>
> Well, adding a WITHOUT OIDS option to CREATE TABLE AS would be a new
> feature, which I don't have the time/interest for.  I won't do anything
> to preclude someone else implementing that, though ;-)

Oh, I thought it already had it from the CREATE TABLE bit...  Does sound
like it would be a good ultimate solution tho.

Chris


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Dann Corbit
[snip]
> For the disk case, why not have the start of the record 
> contain an array of offsets to the start of the data for each 
> column?  It would only be necessary to have a list for 
> variable fields.
> 
> So (for instance) if you have 12 variable fields, you would 
> store 12 integers at the start of the record.

You have to store this information anyway (for variable length objects).
By storing it at the front of the record you would lose nothing (except
the logical coupling of an object with its length).  But I would think
that it would not consume any additional storage.
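
A minimal sketch of the idea (invented names, not PostgreSQL's actual
on-disk tuple format): put the offsets of the variable-width fields in a
small array at the front of each stored record, so locating any one of
them is a direct lookup rather than a scan over the preceding fields.

/*
 * Illustration only -- this is not the PostgreSQL tuple layout.  The record
 * carries one offset per variable-width field, measured from the start of
 * the data area, so no field has to be found by walking the ones before it.
 */
#include <stdint.h>

typedef struct VarOffsetHeader
{
    uint16_t nvar;          /* number of variable-width fields in the row */
    uint16_t var_offset[];  /* byte offset of each variable-width field */
} VarOffsetHeader;

/* Return a pointer to the i-th variable-width field of a record. */
static inline const char *
var_field(const VarOffsetHeader *hdr, const char *data_area, int i)
{
    return data_area + hdr->var_offset[i];
}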

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Tom Lane
"Dann Corbit" <[EMAIL PROTECTED]> writes:
> For the disk case, why not have the start of the record contain an array
> of offsets to the start of the data for each column?  It would only be
> necessary to have a list for variable fields.

No, you'd need an entry for *every* column (or at least, every one to
the right of the first variable-width column or NULL).  That's a lot of
overhead, especially in comparison to datatypes like bool or int4 ...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Dann Corbit
[snip]
> So (for instance) if you have 12 variable fields, you would 
> store 12 integers at the start of the record.

Additionally, you could implicitly size the offset integers from the
properties of the column.  A varchar(255) would only need an unsigned char
to store the offset, while a much longer varchar would require an unsigned
short or int.
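
As a rough sketch of that sizing rule (purely illustrative, invented
names; PostgreSQL does not do this), the offset width could be chosen
from the largest offset a column's declared maximum length could produce:

/*
 * Illustration only: pick the narrowest unsigned integer that can hold the
 * largest offset a column could ever end at, given its declared maximum
 * length.  None of this exists in PostgreSQL.
 */
#include <stdint.h>

typedef enum { OFFSET_WIDTH_1 = 1, OFFSET_WIDTH_2 = 2, OFFSET_WIDTH_4 = 4 } OffsetWidth;

static OffsetWidth
offset_width_for(uint32_t max_offset)
{
    if (max_offset <= UINT8_MAX)
        return OFFSET_WIDTH_1;      /* fits in an unsigned char */
    if (max_offset <= UINT16_MAX)
        return OFFSET_WIDTH_2;      /* fits in an unsigned short */
    return OFFSET_WIDTH_4;          /* needs a full unsigned int */
}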

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [PERFORM] Proposal: relaxing link between explicit JOINs and execution order

2003-01-22 Thread Tom Lane
Josh Berkus <[EMAIL PROTECTED]> writes:
> I am very strongly in favor of this idea.   I would personally prefer it if 
> the Join collapsing parameter could be set at query time through a SET
> statement, but will of course defer to the difficulty level in doing so.

I guess I failed to make it clear that that's what I meant.  GUC
variables are those things that you can set via SET, or in the
postgresql.conf file, etc.  These values would be just as manipulable
as, say, ENABLE_SEQSCAN.

> How about:
> EXPLICIT_JOIN_MINIMUM
> and
> FROM_COLLAPSE_LIMIT

> Just to make the two params not sound so identical?

Hmm.  The two parameters would have closely related functions, so I'd
sort of think that the names *should* be pretty similar.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Dann Corbit
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: Wednesday, January 22, 2003 4:18 PM
> To: Dann Corbit
> Cc: Steve Crawford; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]
> Subject: Re: [HACKERS] Terrible performance on wide selects 
> 
> 
> "Dann Corbit" <[EMAIL PROTECTED]> writes:
> > Why not waste a bit of memory and make the row buffer the maximum 
> > possible length? E.g. for varchar(2000) allocate 2000 characters + 
> > size element and point to the start of that thing.
> 
> Surely you're not proposing that we store data on disk that way.
> 
> The real issue here is avoiding overhead while extracting 
> columns out of a stored tuple.  We could perhaps use a 
> different, less space-efficient format for temporary tuples 
> in memory than we do on disk, but I don't think that will 
> help a lot.  The nature of O(N^2) bottlenecks is you have to 
> kill them all --- for example, if we fix printtup and don't 
> do anything with ExecEvalVar, we can't do more than double 
> the speed of Steve's example, so it'll still be slow.  So we 
> must have a solution for the case where we are disassembling 
> a stored tuple, anyway.
> 
> I have been sitting here toying with a related idea, which is 
> to use the heap_deformtuple code I suggested before to form 
> an array of pointers to Datums in a specific tuple (we could 
> probably use the TupleTableSlot mechanisms to manage the 
> memory for these).  Then subsequent accesses to individual 
> columns would just need an array-index operation, not a 
> nocachegetattr call.  The trick with that would be that if 
> only a few columns are needed out of a row, it might be a net 
> loss to compute the Datum values for all columns.  How could 
> we avoid slowing that case down while making the wide-tuple 
> case faster?

For the disk case, why not have the start of the record contain an array
of offsets to the start of the data for each column?  It would only be
necessary to have a list for variable fields.

So (for instance) if you have 12 variable fields, you would store 12
integers at the start of the record.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Tom Lane
"Dann Corbit" <[EMAIL PROTECTED]> writes:
> Why not waste a bit of memory and make the row buffer the maximum
> possible length?
> E.g. for varchar(2000) allocate 2000 characters + size element and point
> to the start of that thing.

Surely you're not proposing that we store data on disk that way.

The real issue here is avoiding overhead while extracting columns out of
a stored tuple.  We could perhaps use a different, less space-efficient
format for temporary tuples in memory than we do on disk, but I don't
think that will help a lot.  The nature of O(N^2) bottlenecks is you
have to kill them all --- for example, if we fix printtup and don't do
anything with ExecEvalVar, we can't do more than double the speed of
Steve's example, so it'll still be slow.  So we must have a solution for
the case where we are disassembling a stored tuple, anyway.

I have been sitting here toying with a related idea, which is to use the
heap_deformtuple code I suggested before to form an array of pointers to
Datums in a specific tuple (we could probably use the TupleTableSlot
mechanisms to manage the memory for these).  Then subsequent accesses to
individual columns would just need an array-index operation, not a
nocachegetattr call.  The trick with that would be that if only a few
columns are needed out of a row, it might be a net loss to compute the
Datum values for all columns.  How could we avoid slowing that case down
while making the wide-tuple case faster?
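
A rough sketch of that idea with invented names (the real TupleTableSlot
and heap_deformtuple interfaces look different): deform lazily, only as
far as the highest column actually requested, and answer later accesses
from the cached array.

/*
 * Sketch only -- invented types and names, not the backend's actual
 * structures.  The tuple is walked at most once; each column reference
 * after that is an array index.  Deforming only up to the highest column
 * requested keeps the "few columns out of a wide row" case cheap.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uintptr_t Datum;

typedef struct CachedTuple
{
    int     natts;      /* total columns in the row */
    int     nvalid;     /* how many leading columns have been extracted */
    Datum  *values;     /* values[0 .. nvalid-1] are filled in */
    bool   *isnull;
    /* ... plus whatever state is needed to resume the walk ... */
} CachedTuple;

/* Assumed helper (not shown): extract columns [ct->nvalid, upto) in one pass. */
extern void deform_up_to(CachedTuple *ct, int upto);

static Datum
cached_getattr(CachedTuple *ct, int attnum, bool *isnull)
{
    if (attnum > ct->nvalid)
        deform_up_to(ct, attnum);   /* one linear walk, shared by all columns */
    *isnull = ct->isnull[attnum - 1];
    return ct->values[attnum - 1];
}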

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [PERFORM] Proposal: relaxing link between explicit JOINs and execution order

2003-01-22 Thread Josh Berkus
Tom,

I am very strongly in favor of this idea.   I would personally prefer it if 
the Join collapsing parameter could be set at query time through a SET
statement, but will of course defer to the difficulty level in doing so.

> Comments?  In particular, can anyone think of pithy names for these
> variables?  The best I'd been able to come up with is MAX_JOIN_COLLAPSE
> and MAX_FROM_COLLAPSE, but neither of these exactly sing...

How about:
EXPLICIT_JOIN_MINIMUM
and
FROM_COLLAPSE_LIMIT

Just to make the two params not sound so identical?

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Dann Corbit
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: Wednesday, January 22, 2003 4:04 PM
> To: Dann Corbit
> Cc: Steve Crawford; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]
> Subject: Re: [HACKERS] Terrible performance on wide selects 
> 
> 
> "Dann Corbit" <[EMAIL PROTECTED]> writes:
> > Maybe I don't really understand the problem, but it seems simple 
> > enough to do it once for the whole query.
> 
> We already do cache column offsets when they are fixed.  The 
> code that's the problem executes when there's a 
> variable-width column in the table
> --- which means that all columns to its right are not at 
> fixed offsets, and have to be scanned for separately in each 
> tuple, AFAICS.

Why not waste a bit of memory and make the row buffer the maximum
possible length?
E.g. for varchar(2000) allocate 2000 characters + size element and point
to the start of that thing.

If we have 64K rows, even at that it is a pittance.  If someone designs
10,000 row tables, then it will allocate an annoyingly large block of
memory, but bad designs are always going to cause a fuss.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [PERFORM] Proposal: relaxing link between explicit JOINs and execution order

2003-01-22 Thread Tom Lane
Sean Chittenden <[EMAIL PROTECTED]> writes:
> How about something that's runtime tunable via a SET/SHOW config var?

Er, that's what I was talking about.

> I know this is a can of worms, but what about piggy backing on an
> Oracle notation and having an inline way of setting this inside of a
> comment?

I don't want to go there ...

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Tom Lane
"Dann Corbit" <[EMAIL PROTECTED]> writes:
> Maybe I don't really understand the problem, but it seems simple enough
> to do it once for the whole query.

We already do cache column offsets when they are fixed.  The code that's
the problem executes when there's a variable-width column in the table
--- which means that all columns to its right are not at fixed offsets,
and have to be scanned for separately in each tuple, AFAICS.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [PERFORM] Proposal: relaxing link between explicit JOINs and execution order

2003-01-22 Thread Sean Chittenden
> There's been some recent discussion about the fact that Postgres
> treats explicit JOIN syntax as constraining the actual join plan, cf
> http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html
> 
> This behavior was originally in there simply because of lack of time
> to consider alternatives.  I now realize that it wouldn't be hard to
> get the planner to do better --- basically, preprocess_jointree just
> has to be willing to fold JoinExpr-under-JoinExpr into a FromExpr
> when the joins are inner joins.
> 
> But in the meantime, some folks have found the present behavior to be
> a feature rather than a bug, since it lets them control planning time
> on many-table queries.  If we are going to change it, I think we need
> some way to accommodate both camps.
[snip]
> Comments?  In particular, can anyone think of pithy names for these
> variables?  The best I'd been able to come up with is
> MAX_JOIN_COLLAPSE and MAX_FROM_COLLAPSE, but neither of these
> exactly sing...

How about something that's runtime tunable via a SET/SHOW config var?
There are some queries that I have that I haven't spent any time
tuning and would love to have the planner spend its CPU thinking about
it instead of mine.  Setting it to 2 by default, then on my tuned
queries, setting to something obscenely high so the planner won't muck
with what I know is fastest (or so I think at least).

I know this is a can of worms, but what about piggy backing on an
Oracle notation and having an inline way of setting this inside of a
comment?

SELECT /* +planner:collapse_tables=12 */ ?
           ^^^^^^^ ^^^^^^^^^^^^^^^ ^^
           system  variable        value

::shrug::  In brainstorm mode.  Anyway, a few names:

auto_order_join
auto_order_join_max
auto_reorder_table_limit
auto_collapse_join
auto_collapse_num_join
auto_join_threshold

When I'm thinking about what this variable will do for me as a DBA, I
think it will make the plan more intelligent by reordering the joins.
My $0.02.  -sc

-- 
Sean Chittenden

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Dann Corbit
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: Wednesday, January 22, 2003 3:15 PM
> To: Steve Crawford
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: [HACKERS] Terrible performance on wide selects 
> 
> 
> Steve Crawford sent me some profiling results for queries 
> involving wide tuples (hundreds of columns).
> 
> > Done, results attached. nocachegetattr seems to be the 
> likely suspect.
> 
> Yipes, you can say that again.
> 
>    %   cumulative   self              self     total
>   time   seconds   seconds    calls  ms/call  ms/call  name
>  93.38     26.81     26.81   885688     0.03     0.03  nocachegetattr
> 
>                 0.00    0.00       1/885688      heapgettup [159]
>                 0.00    0.00       1/885688      CatalogCacheComputeTupleHashValue [248]
>                 0.00    0.00       5/885688      SearchCatCache [22]
>                13.40    0.00  442840/885688      ExecEvalVar [20]
>                13.40    0.00  442841/885688      printtup [12]
> [11]    93.4   26.81    0.00   885688            nocachegetattr [11]
> 
> 
> Half of the calls are coming from printtup(), which seems 
> relatively easy to fix.
> 
>     /*
>      * send the attributes of this tuple
>      */
>     for (i = 0; i < natts; ++i)
>     {
>         ...
>         origattr = heap_getattr(tuple, i + 1, typeinfo, &isnull);
>         ...
>     }
> 
> The trouble here is that in the presence of variable-width 
> fields, heap_getattr requires a linear scan over the tuple 
> --- and so the total time spent in it is O(N^2) in the number 
> of fields.
> 
> What we could do is reinstitute heap_deformtuple() as the inverse of
> heap_formtuple() --- but make it extract Datums for all the 
> columns in a single pass over the tuple.  This would reduce 
> the time in printtup() from O(N^2) to O(N), which would 
> pretty much wipe out that part of the problem.
> 
> The other half of the calls are coming from ExecEvalVar, 
> which is a harder problem to solve, since those calls are 
> scattered all over the place.  It's harder to see how to get 
> them to share work.  Any ideas out there?

Is it possible that the needed information could be retrieved by
querying the system metadata to collect the column information?

Once the required tuple attributes are described, it could form a
binding list that allocates a buffer of sufficient size with pointers to
the required column start points.

Maybe I don't really understand the problem, but it seems simple enough
to do it once for the whole query.

If this is utter stupidity, please disregard and have a hearty laugh at
my expense.
;-)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Terrible performance on wide selects

2003-01-22 Thread Tom Lane
Steve Crawford sent me some profiling results for queries involving wide
tuples (hundreds of columns).

> Done, results attached. nocachegetattr seems to be the likely suspect.

Yipes, you can say that again.

   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
 93.38     26.81     26.81   885688     0.03     0.03  nocachegetattr

                0.00    0.00       1/885688      heapgettup [159]
                0.00    0.00       1/885688      CatalogCacheComputeTupleHashValue [248]
                0.00    0.00       5/885688      SearchCatCache [22]
               13.40    0.00  442840/885688      ExecEvalVar [20]
               13.40    0.00  442841/885688      printtup [12]
[11]    93.4   26.81    0.00   885688            nocachegetattr [11]


Half of the calls are coming from printtup(), which seems relatively easy
to fix.

    /*
     * send the attributes of this tuple
     */
    for (i = 0; i < natts; ++i)
    {
        ...
        origattr = heap_getattr(tuple, i + 1, typeinfo, &isnull);
        ...
    }

The trouble here is that in the presence of variable-width fields,
heap_getattr requires a linear scan over the tuple --- and so the total
time spent in it is O(N^2) in the number of fields.

What we could do is reinstitute heap_deformtuple() as the inverse of
heap_formtuple() --- but make it extract Datums for all the columns in
a single pass over the tuple.  This would reduce the time in printtup()
from O(N^2) to O(N), which would pretty much wipe out that part of the
problem.
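
For illustration, a heavily simplified single-pass extraction loop
(invented names and a simplified row format, not the real
heap_deformtuple or tuple layout; nulls and alignment are ignored): the
running offset is advanced once per column, so the whole row costs O(N)
instead of the O(N^2) of calling heap_getattr for each column.

/*
 * Simplified sketch, not backend code: fixed-width columns advance the
 * offset by their declared length, variable-width columns by a length word
 * plus its value.  One pass records where every column starts.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct AttrDesc
{
    int16_t attlen;             /* bytes for fixed width, or -1 for variable */
} AttrDesc;

static void
deform_all(const char *data, const AttrDesc *atts, int natts,
           const char **starts, size_t *lens)
{
    size_t      off = 0;

    for (int i = 0; i < natts; i++)
    {
        size_t  len;

        if (atts[i].attlen >= 0)
            len = (size_t) atts[i].attlen;      /* fixed-width column */
        else
        {
            uint32_t vlen;                      /* variable width: length word first */

            memcpy(&vlen, data + off, sizeof(vlen));
            len = sizeof(vlen) + vlen;
        }
        starts[i] = data + off;
        lens[i] = len;
        off += len;                             /* advance once, never rescan */
    }
}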

The other half of the calls are coming from ExecEvalVar, which is a
harder problem to solve, since those calls are scattered all over the
place.  It's harder to see how to get them to share work.  Any ideas
out there?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Proposal: relaxing link between explicit JOINs and execution order

2003-01-22 Thread Tom Lane
There's been some recent discussion about the fact that Postgres treats
explicit JOIN syntax as constraining the actual join plan, cf
http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html

This behavior was originally in there simply because of lack of time to
consider alternatives.  I now realize that it wouldn't be hard to get
the planner to do better --- basically, preprocess_jointree just has to
be willing to fold JoinExpr-under-JoinExpr into a FromExpr when the
joins are inner joins.

But in the meantime, some folks have found the present behavior to be
a feature rather than a bug, since it lets them control planning time
on many-table queries.  If we are going to change it, I think we need
some way to accommodate both camps.

What I've been toying with is inventing a GUC variable or two.  I am
thinking of defining a variable that sets the maximum size of a FromExpr
that preprocess_jointree is allowed to create by folding JoinExprs.
If this were set to 2, the behavior would be the same as before: no
collapsing of JoinExprs can occur.  If it were set to a large number,
inner JOIN syntax would never affect the planner at all.  In practice
it'd be smart to leave it at some value less than GEQO_THRESHOLD, so
that folding a large number of JOINs wouldn't leave you with a query
that takes a long time to plan or produces unpredictable plans.

There is already a need for a GUC variable to control the existing
behavior of preprocess_jointree: right now, it arbitrarily uses
GEQO_THRESHOLD/2 as the limit for the size of a FromExpr that can be
made by collapsing FromExprs together.  This ought to be a separately
settable parameter, I think.

Comments?  In particular, can anyone think of pithy names for these
variables?  The best I'd been able to come up with is MAX_JOIN_COLLAPSE
and MAX_FROM_COLLAPSE, but neither of these exactly sing...

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Foreign key weirdness

2003-01-22 Thread Dave Page


> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: 22 January 2003 20:30
> To: Didier Moens
> Cc: Dave Page; PostgreSQL Hackers Mailing List
> Subject: Re: [HACKERS] Foreign key wierdness 
> 
> 
> > With two tables each containing some 20.000 entries, the fk creation
> > time between both of them increases from ~ 1.8 secs to ~ 221 secs.
> 
> Seems odd that the cost would get *that* much worse.  Maybe 
> we need to look at whether the FK checking queries need to 
> include explicit casts ...

Does seem like it needs some work, though it could be argued that the
columns should be of the same type (which they were, of course, until I
got clever and increased the size of the autonumber column that pgAdmin
creates!).

Anyway, as always, many thanks for your help and putting our testing
back on a sensible course after much head scratching!! :-)

Regards, Dave.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Foreign key weirdness

2003-01-22 Thread Dave Page


> -Original Message-
> From: Didier Moens [mailto:[EMAIL PROTECTED]] 
> Sent: 22 January 2003 16:05
> To: Dave Page
> Cc: Tom Lane; PostgreSQL Hackers Mailing List
> Subject: Re: [HACKERS] Foreign key wierdness
> 
> 
> I did some extensive testing using PostgreSQL 7.3.1 (logs and results 
> available upon request), and the massive slowdown is NOT related to 
> qualified tablename syntax or (lack of) VACUUM ANALYZE, but to the 
> following change :
> 
> pgAdminII 1.4.2 :
> ---
> CREATE TABLE articles (
> article_id integer DEFAULT 
> nextval('"articles_article_id_key"'::text) NOT NULL,
> ...
> 
> pgAdminII 1.4.12 :
> 
> CREATE TABLE articles (
> article_id bigint DEFAULT 
> nextval('"articles_article_id_key"'::text) 
> NOT NULL,
> ...

I'd never have guessed that one, despite noting that particular change
in the code some time ago. Oh well...

Anyway, that was changed in pgAdmin because I got a couple of complaints
about its use of int4 for migrated serial/autonumber columns. One user
was migrating some *very* large numbers from a SQL Server.

From what Tom has said in his response, I think the answer for you Didier
is to remap your integer columns to int8 instead of int4 and see what
happens. When I get a couple of minutes I will look at putting a "Serials
as..." option in the type map.

Regards, Dave.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Release Schedules: 7.2.4 & 7.3.2

2003-01-22 Thread Robert Treat
On Wed, 2003-01-22 at 14:23, Marc G. Fournier wrote:
> If anyone has any 'last minute' issues they would like to see in either,
> please speak now or forever hold your peace :)
> 

Can someone post a "changelog" for these releases? Also what tags will
be created/used in CVS?

Robert Treat 



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Win32 port

2003-01-22 Thread Jan Wieck
Viacheslav N Tararin wrote:
> 
> Hi.
> 
> Do the Win32 port sources exist in CVS?

Not yet.

I sent patches for a native Win32 port of v7.2.1 out a few days ago.
It's sure a couple weeks away before something against current CVS HEAD
comes out of that.

You can find the patches at

http://www.janwieck.net/win32_port


Jan

-- 
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] v7.3.1 psql against a v7.2.x database ...

2003-01-22 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I'd support making psql 7.3 and forward be aware of the backend they 
> are connecting to, and support them being able to work against all 7.3+ 
> servers, but I still fail to see the pressing need for a backward-compatible 
> version when the correct one is always shipped with the server. 

The scenario where it's valuable involves multiple machines: if you
connect to another machine that is running an older Postgres, it'd be
nice not to have to have a matching psql installed locally.  For
example, consider someone who does development work (so has the latest
PG installed on his workstation) but also must admin a production box
with an older PG version installed there.  As things currently stand,
he has to keep an extra copy of psql on his workstation to use for
talking to the production server.

But, while I see the value in it, I'm not personally prepared to put in
the work needed to make it happen.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] C++ coding assistance request for a visualisation

2003-01-22 Thread Greg Copeland
Have you tried IBM's OSS visualization package yet?  Sorry, I don't seem
to recall the name of the tool off the top of my head (Data Explorer??)
but it uses OpenGL (IIRC) and is said to be able to visualize just about
anything.  Anything is said to include simple data over time to complex
medical CT scans.


Greg


On Wed, 2003-01-22 at 12:19, Justin Clift wrote:
> Hi guys,
> 
> Is there anyone here that's good with C++ and has a little bit of time
> to add PostgreSQL support to a project?
> 
> There is a 4D visualisation program called Flounder:
> 
> http://www.enel.ucalgary.ca/~vigmond/flounder/
> 
> And it does some pretty nifty stuff.  It takes in data sets (x, y, z,
> time) and displays them graphically, saving them to image files if
> needed, and also creating the time sequences as animations if needed.
> 
> Was looking at it from a "performance tuning tool" point of view.  i.e.
> Testing PostgreSQL performance with a bunch of settings, then stuffing
> the results into a database, and then using something like Flounder for
> visualising it.
> 
> It seems pretty simple, and Flounder seems like it might be the right
> kind of tool for doing things like this.  Was emailing with Edward
> Vigmond, the author of it, and he seems to think it'd be pretty easy to
> implement too.
> 
> Now, I'm not a C++ coder, and as short of time as anyone, so I was
> wondering if there is anyone here who'd be interested in helping out here.
> 
> :-)
> 
> Regards and best wishes,
> 
> Justin Clift
-- 
Greg Copeland <[EMAIL PROTECTED]>
Copeland Computer Consulting


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: Windows Build System was: [HACKERS] Win32 port patches

2003-01-22 Thread [EMAIL PROTECTED]

Firebird uses a set of Borland command line tools and Borland's make,
which Borland gives away as a free download.  Even if you're compiling for
Windows, the build process uses Borland's command line "make".  A batch
build script copies makefiles from a single source directory, spreads
them around the tree, and then kicks off Borland's make.  For things to work
successfully, you must download Borland's tools, install them, and set a
few environment variables by hand.  The Borland command line tools are just
a set of Unix-style utilities like grep, sed, make, (sh?) etc.  Once upon a
time Firebird required the cygwin utilities, but it managed to purge itself
of cygwin by using the Borland utilities.  When it required cygwin, it also
required some Borland utilities anyway, so it had a real reason for purging
cygwin.  If someone thinks the cygwin package is too big, we could require
the Borland utilities instead :)

For my 2 cents, I would say the project files should be a separate 
download.  Let someone build, test, and contribute them for particular 
versions of PostgreSQL.  I would only try to make the Visual Studio 
files work on true releases.  I would _not_ try to keep them updated in 
CVS or build them on the fly.  W3.org's libwww does something like this.

bbaker



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Windows Build System

2003-01-22 Thread Curtis Faith
Hannu Krosing asked:
> Does anyone know how MySQL and interbase/firebird do it ?
> 

>From the MySQL web site for version 4.0:

"The Windows binaries use the Cygwin library. Source code for the
version of Cygwin we have used is available on this page."

I think this offers a very big opportunity to differentiate. If we had
project support it would make PostgreSQL a more natural choice for
Windows developers.

Even in a shop that uses Unix servers with Windows front-ends, having a
database server that runs on your own machine makes it much easier to
debug, write code on airplanes, etc.

- Curtis





---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Win32 port patches submitted

2003-01-22 Thread Peter Eisentraut
Jan Wieck writes:

> We focused on porting the programs. The goal was to have PostgreSQL
> running native on Win32 for a user. Having a nice and easy maintainable
> cross platform config, build and test environment for the developers is
> definitely something that still needs to be done (hint, hint).

I have prepared a little patch that makes room for a native Windows build
in our existing build framework.  The Cygwin port would be renamed to
"cygwin" and the new port takes over the "win" name.  I have prepared the
port specific template and makefile and extracted the dynaloader from your
patch, so that you can at least run configure under Cygwin or MinGW
successfully.

Then I suggest we merge in the obvious parts of your patch, especially the
renaming of various token constants, the shmem implementation, some
library function reimplementations.  In some cases I would like a bit more
abstraction so that we don't have so many #ifdef's.  (For example, we
could have a IsAbsolutePath() that works magically for all platforms.)
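
For instance, such a macro might look roughly like this (a sketch only,
not the actual patch or anything in the tree):

/* Sketch of a platform-neutral IsAbsolutePath(); illustrative only. */
#ifdef WIN32
#define IsAbsolutePath(p) \
    ((p)[0] == '/' || (p)[0] == '\\' || \
     ((((p)[0] >= 'A' && (p)[0] <= 'Z') || ((p)[0] >= 'a' && (p)[0] <= 'z')) && \
      (p)[1] == ':' && ((p)[2] == '/' || (p)[2] == '\\')))
#else
#define IsAbsolutePath(p)   ((p)[0] == '/')
#endif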

Then there are the hairy pieces.  You add a bunch of command-line options
that interact in puzzling ways to communicate information about the fake
fork.  I think some of these are redundant, but it's hard to tell.

The reimplementation of various shell scripts in C is something that would
be a good idea on Unix as well for a number of reasons.  Unfortunately,
the ones you wrote have no chance of compiling under Unix, so we'll have
to do it again.  But that can happen in parallel to the other stuff.

Two quick questions:  Why PG_WIN32 and not just WIN32?  Can the ConsoleApp
thing be written in C so we don't have to get an extra C++ compiler for
one file (for those who don't want to use the Microsoft toolchain)?

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [PATCHES] targetlist functions part 1 (was [HACKERS] targetlist

2003-01-22 Thread Peter Eisentraut
Tom Lane writes:

> > With sufficiently blurred vision one might even find SQL99's clause
> >   <collection derived table> ::=
> >   UNNEST <left paren> <collection value expression> <right paren>
> > applicable.  Or maybe not.
>
> Hm.  I'm not sure what UNNEST does, but now that you bring SQL99 into
> the picture, what about WITH?  That might solve the problem, because
> (I think) WITH tables are logically determined before the main SELECT
> begins to execute.

The SQL 200x draft defines a new clause TABLE ( <expression> ) as a
possible <table reference>, where the <expression> is required to be a
function call.  At the end this just boils down to UNNEST, though.
UNNEST is defined in terms of a hairy recursive join subquery with a
LATERAL( ) around it.  LATERAL(subquery) is the same as just (subquery)
except that the scope clauses are different.  So I think this is
probably what we ought to look at.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[HACKERS] Win32 port

2003-01-22 Thread Viacheslav N Tararin
Hi.

Do the Win32 port sources exist in CVS?

Thanks.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Foreign key weirdness

2003-01-22 Thread Tom Lane
Didier Moens <[EMAIL PROTECTED]> writes:
> I did some extensive testing using PostgreSQL 7.3.1 (logs and results 
> available upon request), and the massive slowdown is NOT related to 
> qualified tablename syntax or (lack of) VACUUM ANALYZE, but to the 
> following change :

> pgAdminII 1.4.2 :
> ---
> CREATE TABLE articles (
> article_id integer DEFAULT 
> nextval('"articles_article_id_key"'::text) NOT NULL,
> ...

> pgAdminII 1.4.12 :
> 
> CREATE TABLE articles (
> article_id bigint DEFAULT nextval('"articles_article_id_key"'::text) 
> NOT NULL,
> ...

Ah-hah, and I'll bet that the column being linked to this one by the
foreign key constraint is still an integer?

> With two tables each containing some 20.000 entries, the fk creation 
> time between both of them increases from ~ 1.8 secs to ~ 221 secs.

Seems odd that the cost would get *that* much worse.  Maybe we need to
look at whether the FK checking queries need to include explicit casts
...

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] Namespace/Table Visibility Behavior Issues

2003-01-22 Thread D. Hageman

I didn't see this make it to the list.  I thought I would try again.

-- 
//\\
||  D. Hageman<[EMAIL PROTECTED]>  ||
\\//

-- Forwarded message --
Date: Sat, 18 Jan 2003 16:51:12 -0600 (CST)
From: D. Hageman <[EMAIL PROTECTED]>
To: PostgreSQL-development <[EMAIL PROTECTED]>
Subject: Namespace/Table Visibility Behavior Issues


Assume a database with a couple of namespaces.  Give two of these 
namespaces the names test_1 and test_2.  Under these namespaces create a 
couple of tables with the names: example, example_2, example_3.

set search_path to test_1, test_2;

In the psql client, using a standard \d you will only see the namespace
test_1 listed and the tables underneath it.  test_2 will not be visible
because its tables fail the pg_table_is_visible() check.

I am not sure that it is wise to do the pg_table_is_visible check for those
commands.  In my humble opinion, those commands are for understanding the
layout/structure/nature of the database.  If you can't see all the
namespaces that you set in your search_path, it could distort one's
understanding of the database.

-- 
//\\
||  D. Hageman<[EMAIL PROTECTED]>  ||
\\//

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] Object Locks

2003-01-22 Thread Rod Taylor
Several objects have issues where you can drop them while a dependent is
being created -- a good example is Domains (and types in general) -- but
I'll hunt down others later (languages?).  The solution (as Tom stated
earlier) is to create a more generic lock tag.

1.  Modify LOCKTAG to include 'classId'. Rename 'relId' to 'objId', and
rename 'objId' to 'objsubId' -- very similar to dependency tracking.
Expands locktag by sizeof(Oid), but also allows locking anything that we
can attach a dependency to.  If it wasn't for Page locks, objId could be
a part of the union with xid.

typedef struct LOCKTAG
{
    Oid         objId;
    Oid         classId;
    Oid         dbId;
    union
    {
        BlockNumber     blkno;
        TransactionId   xid;
    }           objsubId;

    /*
     * offnum should be part of objId.tupleId above, but would increase
     * sizeof(LOCKTAG) and so moved here; currently used by userlocks
     * only.
     */
    OffsetNumber offnum;

    uint16      lockmethod;     /* needed by userlocks */
} LOCKTAG;

2. Modify current locks to set classId to RelOid_pg_class in most
cases.  Transaction locks will set classId to XactLockTableId and objId
to InvalidOid.

3. Generate two new lock functions which allow locking of an arbitrary
classId with the intent that they will be used to lock / unlock
non-relation or transaction oriented locks (none of the current locks). 
The lock functions will only deal with 'AccessShare' and
'AccessExclusive' locks -- others will be rejected.

4. Lock problematic Type / Domain areas (classId RelOid_pg_type):
ALTER TABLE .. [ ADD | ALTER | DROP ] COLUMN (AccessShareLock)
CREATE TABLE -- one per column (AccessShareLock)
SELECT -- one per column in query (AccessShareLock)

DROP DOMAIN / DROP TYPE (AccessExclusive)
ALTER DOMAIN (AccessExclusive)

Since the below aren't visible, they don't require a lock:
CREATE DOMAIN / CREATE TYPE (AccessExclusive)


5. Continue the process with schemas, languages, casts, etc.  All of
which can be dropped while another entity is being created that uses it.

PSQL 1: CREATE SCHEMA sch;
PSQL 1: BEGIN; CREATE TABLE sch.tab (col integer);
PSQL 2: DROP SCHEMA sch;
PSQL 1: COMMIT;   -- Table tab is missing a schema entry.

The above works with almost all database objects that are not relations
(tables, views, indexes, sequences should be fine).


Potential Problems:

- This will add a huge number of 'AccessShare' locks, as every select
will need to lock the types, casts, etc. involved.

- You don't need to lock on pinned objects like the 'integer' type as
it's guaranteed not to be dropped.  If we create an ALTER TYPE command
this changes.  Do we kill a syscache lookup to prevent the lock --
probably not -- but it makes the above-mentioned AccessShareLock count
much larger.


-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc





[HACKERS] Release Schedules: 7.2.4 & 7.3.2

2003-01-22 Thread Marc G. Fournier

I'm running a little late on this, but we'd like to put out a release of
each of the above over the course of the next 2 weeks ...

What I'd like to aim for is v7.2.4 for Tuesday next week, and v7.3.2 for
the following week ... this way it doesn't pull efforts from one to get
the other out ...

If anyone has any 'last minute' issues they would like to see in either,
please speak now or forever hold your peace :)



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: Windows Build System was: [HACKERS] Win32 port patches

2003-01-22 Thread [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Windows Build System

2003-01-22 Thread Curtis Faith
I (Curtis Faith) previously wrote:
> > The Visual C++ Workspaces and Projects files are actually
> > text files that have a defined format. I don't think the format is 
> > published but it looks pretty easy to figure out.

Hannu Krosing replied:
> will probably change between releases

Even if the format changes, the environment always has a converter that
updates the project and workspace files to the new format. In other
words, Visual C++ 6.0 reads 5.0 projects, 7.0 reads 6.0, etc.

The format is mostly a bunch of option specifications (which wouldn't
get touched) followed by a set of named groups of source files. Even if
the overall format changes, it is much more likely to change in the
specifications than in the way the lists of source files are specified.

A conversion tool, call it BuildWindowsProjectFile, would only need to:

1) Read in the template file (containing all the options specifications
and Visual C++ specific stuff, debug and release target options,
libraries to link in, etc.) This part might change with new versions of
the IDE and would be manually created by someone with Visual C++
experience.

2) Read in the PostgreSQL group/directory map, or alternatively just
mirror the groups with the directories.

3) Output the files from the PostgreSQL directories in the appropriate
grouping according to the project format into the appropriate space in
the template.

An excerpt of the format follows:

# Begin Group "Access"
# Begin Group "Common"
# PROP Default_Filter "cpp;c;cxx"
# Begin Source File

SOURCE=.\access\common\heaptuple.c
# End Source File
# Begin Source File

SOURCE=.\access\common\indextuple.c
# End Source File

... other files in access\common go here
# End Group

# Begin Group "Index"

# PROP Default_Filter "cpp;c;cxx"
# Begin Source File

SOURCE=.\access\index\genam.c
# End Source File
# Begin Source File

SOURCE=.\access\index\indexam.c
# End Source File

... other files in access\index go here
# End Group

# End Group


As you can see, this is a really simple format, and the direct
folder/group mapping to PostgreSQL directory is pretty natural and
probably the way to go.

Using the approach I outline, it should be possible to have the Unix
make system automatically run the BuildWindowsProjectFile tool whenever
any makefile changes so the Windows projects would stay up to date
without additional work for Unix developers.

Hannu Krosing also wrote:
> (also I don't think you can easily compile C source on a
> C# compiler) ;/

I don't think it makes much sense to target a compiler that won't compile
the source, therefore, if what you say is true, we shouldn't bother with
targeting C#.

- Curtis



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] C++ coding assistance request for a visualisation tool

2003-01-22 Thread Justin Clift
Hi guys,

Is there anyone here that's good with C++ and has a little bit of time
to add PostgreSQL support to a project?

There is a 4D visualisation program called Flounder:

http://www.enel.ucalgary.ca/~vigmond/flounder/

And it does some pretty nifty stuff.  It takes in data sets (x, y, z,
time) and displays them graphically, saving them to image files if
needed, and also creating the time sequences as animations if needed.

Was looking at it from a "performance tuning tool" point of view.  i.e.
Testing PostgreSQL performance with a bunch of settings, then stuffing
the results into a database, and then using something like Flounder for
visualising it.

It seems pretty simple, and Flounder seems like it might be the right
kind of tool for doing things like this.  Was emailing with Edward
Vigmond, the author of it, and he seems to think it'd be pretty easy to
implement too.

Now, I'm not a C++ coder, and as short of time as anyone, so I was
wondering if there is anyone here who'd be interested in helping out here.

:-)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] v7.3.1 psql against a v7.2.x database ...

2003-01-22 Thread Rod Taylor
On Wed, 2003-01-22 at 11:11, [EMAIL PROTECTED] wrote:
> 
> > Is this very different from how it's done at present?
> 
> Yes. :)
> 
> I'd like to play Devil's Advocate a bit on the whole backward-compatible 
> psql issue. First, I have not seen a lot of clamor for this sort of thing. 
> Second, psql comes bundled with the project; you cannot build the 
> postgresql binary without getting the latest and greatest psql installed 
> as well. So it is not as if this is a standalone app that someone may not 
> have the latest version of. Having a working version of psql is actually 
> a prerequisite for releasing a new version of Postgres! Third, the changes 
> from 7.2 to 7.3 in psql were fairly severe with the addition of schemas, 
> and don't really lend themselves well to a rewrite to handle previous 
> versions. I recall someone (Tom?) asked if anyone wanted to step up 
> to the plate on making something like that some time ago, but nobody did.
> Fourth, my custom version is an enhanced 7.2, not a compatible 7.3, 
> so my existing work would not be too helpful in this case.
> 
> I'd support making psql 7.3 and forward be aware of the backend they 
> are connecting to, and support them being able to work against all 7.3+ 
> servers, but I still fail to see the pressing need for a backward-compatible 
> version when the correct one is always shipped with the server. 

New commands in psql are probably worth the upgrade right there --
especially in the case of 7.3.  Protocol changes make it all but
impossible (like what 7.4 is probably going to get).

But, 1/2 the problem can be removed if we stuff the logic into the
backend.  This includes moving the queries as well as the list of
available commands.

I say we abuse prepared queries.

Create a table in the database to hold the list of available psql
commands (\dt, \dD, \dS, etc.) along with a preparable query, and the
number of expected arguments:

CREATE TABLE pg_psql_commandset

-- Command as typed in psql (\dt, \dD, \dS, etc)
( command varchar(5)

-- Number of arguments after the command
, nargs smallint

-- The sql whose output will be used directly to display a table
, tablesql text

-- The sql whose output will be displayed as additional lineitems after
-- the table, like foreign keys and primary keys after the table
-- information
, extrasql text

-- A command may be different based on the number of arguments.
, primary key (command, nargs)
);

The entry for \dt would be:

command:\dt
nargs:  0
tablesql:   SELECT 
extrasql:   NULL


The entry for '\dt a' would be:

command:\dt
nargs:  1
tablesql:   SELECT .. WHERE relname like '^' || ? || '$' ...
extrasql:   SELECT key, name, text FROM 


Execution:
0. Start psql client
1. User types \dt.
2. psql checks to see if '\dt with 0 args' has been prepared (it hasn't)
3. Find info on \dt with 0 args, and pull it out from pg_psql_commandset
4. If found, prepare \dt with 0 args (assume all args are text for
simplicity).  If a command isn't available, tell the user that PostgreSQL 7.x
doesn't support that command.
5. Fetch results of the tablesql and extrasql (where not null), and
display to the user.


So long as the structure of the pg_psql_commandset table doesn't change,
all future versions (barring a protocol change) should be able to use
any backend version with an appropriate command set for the backend.

The trickiest part is taking the *.* style args and using them
appropriately, but that's not too difficult.

tablesql and extrasql could (in some cases) simply call system functions
-- but I'd not want to hardcode the function structure as psql would
need to be taught how to deal with varying input types (may end up with
several versions of the function list).

It would be better if prepared statements didn't insist on knowledge of
the datatypes from the client -- but I think they can be discovered from
the context of the variable use instead (Neil?).

Anyway, it's just a thought.  psql in 7.4 is free game due to the
anticipated protocol change which will make prior versions
non-functioning anyway.

-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc





Re: [HACKERS] v7.3.1 psql against a v7.2.x database ...

2003-01-22 Thread Daniel Kalchev
>>>[EMAIL PROTECTED] said:
 > I'd support making psql 7.3 and forward be aware of the backend they 
 > are connecting to, and support them being able to work against all 7.3+ 
 > servers, but I still fail to see the pressing need for a backward-compatible
  
 > version when the correct one is always shipped with the server. 

This hits the nail on the head. I was just weighing the pros and cons of
upgrading a large production system from 7.2.3 to 7.3.x.

The trouble is, there is a need to access both types of databases, new and
old, often from the same psql executable.

So psql should definitely be backward compatible!

Daniel


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [mail] Re: [HACKERS] Win32 port patches submitted

2003-01-22 Thread Lamar Owen
On Wednesday 22 January 2003 02:01, Dann Corbit wrote:
> Maybe because most of the machines in the world (by a titanic landslide)
> are Windoze boxes.

On the desktop, yes.  On the server, no.  PostgreSQL is more intended for a
server, no?  I can see the utility in having a development installation on a 
Win32 box, though.

> > people to do it. Usually Open Source guys run *NIX

> Taken a poll lately?

If Microsoft has its way there won't be any Open Source on Windows.  Well, 
PostgreSQL might squeak by due to the BSD license, but other licenses aren't 
so fortunate, and GPL is anathema to Microsoft.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] v7.3.1 psql against a v7.2.x database ...

2003-01-22 Thread greg



> Is this very different from how it's done at present?

Yes. :)

I'd like to play Devil's Advocate a bit on the whole backward-compatible 
psql issue. First, I have not seen a lot of clamor for this sort of thing. 
Second, psql comes bundled with the project; you cannot build the 
postgresql binary without getting the latest and greatest psql installed 
as well. So it is not as if this is a standalone app that someone may not 
have the latest version of. Having a working version of psql is actually 
a prerequisite for releasing a new version of Postgres! Third, the changes 
from 7.2 to 7.3 in psql were fairly severe with the addition of schemas, 
and don't really lend themselves well to a rewrite to handle previous 
versions. I recall someone (Tom?) asked if anyone wanted to step up 
to the plate on making something like that some time ago, but nobody did.
Fourth, my custom version is an enhanced 7.2, not a compatible 7.3, 
so my existing work would not be too helpful in this case.

I'd support making psql 7.3 and forward be aware of the backend they 
are connecting to, and support them being able to work against all 7.3+ 
servers, but I still fail to see the pressing need for a backward-compatible 
version when the correct one is always shipped with the server. 

--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200301221120




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: Windows Build System was: [HACKERS] Win32 port patches

2003-01-22 Thread Hannu Krosing
On Wed, 2003-01-22 at 15:34, Curtis Faith wrote:
> tom lane writes:
> > You think we should drive away our existing unix developers 
> > in the mere hope of attracting windows developers?  Sorry, it 
> > isn't going to happen.
> 
> Tom brings up a good point, that changes to support Windows should not
> add to the tasks of those who are doing the bulk of the work on Unixen.
> 
> I don't think, however, that this necessarily means that having Windows
> developers use Cygwin is the right solution. We need to come up with a
> way to support Windows Visual C++ projects without adding work to the
> other developers. 

Does anyone know how MySQL and Interbase/Firebird do it?


> POSSIBLE SOLUTIONS:
> 
> The Visual C++ Workspaces and Projects files are actually text files
> that have a defined format. I don't think the format is published but it
> looks pretty easy to figure out.

will probably change between releases (also I don't think you can easily
compile C source with a C# compiler) ;/


-- 
Hannu Krosing <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] 7.4 Wishlist

2003-01-22 Thread Antti Haapala

> > SHOW
> > 
> > 
> > I think 7.4 could and really should implement a SHOW command similar to
> > MySQL's. Listing tables/foreign keys/views and so on isn't just a psql problem,
> 
> Actually, in 7.4 I'd tell them to:
> 
> select * from information_schema.tables;
> 
> This is a far more portable method.

Yep. You know it was just a draft which wasn't actually meant to be
sent...

I think the contents of that information schema could and should be user
modifiable views...

I needed to administer one of my 7.3 databases from another computer with
psql 7.2.3 (that machine had PG 7.2.3 running), and \d on views, for example,
failed gratuitously with the error "relation pg_relcheck not found". Of
course, the dumps, SQL commands, TSV data etc. that I needed were only on
that 7.2.3 machine...

Many interfaces need to enumerate tables, databases, etc. I'm not
interested in having different versions of, for example, DBD::Pg... one
version, the most up-to-date, should do the job against whatever version of
the postmaster I want to use it with.

So maybe backwards compatibility (older clients and applications against a
newer DBMS) could also be introduced by having different information schemas
for different client versions... I don't know how it should actually be done.
Maybe there would be schemas "info_compat_74", "info_compat_75" and
"information_schema_80" in PostgreSQL 8.0 :P and "information_schema" could
be an alias to the most appropriate of these, depending on the client version
:) (dunno).
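
(Just to illustrate, and purely hypothetically -- none of these objects exist
today -- the client side of it could be as simple as pointing search_path at
the right compatibility schema:

  -- hypothetical per-version compatibility schema
  SET search_path = info_compat_74, pg_catalog;
  SELECT * FROM tables;  -- would resolve to info_compat_74.tables

assuming such per-version view schemas were shipped with the server.)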

-- 
Antti Haapala 
+358 50 369 3535 
ICQ: #177673735


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: Windows Build System was: [HACKERS] Win32 port patches submitted

2003-01-22 Thread Curtis Faith
tom lane writes:
> You think we should drive away our existing unix developers 
> in the mere hope of attracting windows developers?  Sorry, it 
> isn't going to happen.

Tom brings up a good point, that changes to support Windows should not
add to the tasks of those who are doing the bulk of the work on Unixen.

I don't think, however, that this necessarily means that having Windows
developers use Cygwin is the right solution. We need to come up with a
way to support Windows Visual C++ projects without adding work to the
other developers. 

I believe this is possible and have outlined some ways at the end, but
first some rationale:

One of the biggest benefits to Open Source projects is the ability to
get in there and debug/fix problems using the source. PostgreSQL will
lose out to lesser DBs if there is no easy way to build and DEBUG the
source on Windows. This is true whether one admits that Windows sucks or
not.

A developer faced with the decision of choosing:

A) a system that has a native Windows Visual C++ project that runs and
compiles the release with no work.

B) a system that requires learning a new way of building, new tools, new
who knows what else.

will always choose A unless there is a very compelling reason to choose
B. There are plenty of reasons a clever (or even not so clever) Windows
developer can use to justify using MySQL or another open source DB
instead of PostgreSQL. It is a bit harder for the neophyte to find and
believe the compelling reasons to use PostgreSQL. We need to make it
easier to choose PostgreSQL not harder.

Think about it from this perspective. How many of you would even think
about working on a project if it required that you stop using your
favorite toolset: gmake? EMACS? grep? shell scripts? etc.?

Professional developers spend years honing their skills. Learning the
proper use of the tools involved is a very big part of the process.

IMHO, having a native port without native (read Visual C++) project
support is a huge missed opportunity.

Further, the lack of Windows project files implies that PostgreSQL is just a
Unix port and that the Windows support is immature, whether the code is
well supported or not.

POSSIBLE SOLUTIONS:

The Visual C++ Workspaces and Projects files are actually text files
that have a defined format. I don't think the format is published but it
looks pretty easy to figure out.

The Visual C++ environment does not require dependency specification, it
builds dependency trees by keeping track of the #include files used
during preprocessing.

Because of this, it should be possible to:

A) Write a script/tool that reads the input files from Unix makefiles to
build a list of the files in PostgreSQL and place them in appropriate
projects.

or alternately:

B) A script/tool that recurses the directories and does the same sort of
thing. There could be some sort of mapping between directories and
projects in Visual C++.

In short, for most organizations being able to easily build using the
source is a prerequisite for USING an open source database, not just for
being part of the DEVELOPMENT effort.

-Curtis

P.S. I speak from personal experience: I would have been able to help
out a lot more if I hadn't had to spend 90% of my time working with
PostgreSQL learning (or relearning) Unix and GNU tool issues. I don't
personally mind so much because I wanted to learn it better anyway, but
it has definitely limited my ability to help so far. This is especially
true since I don't have the opportunity to immerse myself in
Unix/PostgreSQL for days at a time, so I suffer from significant switching
costs.





---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Call for objections: put back OIDs in CREATE TABLE AS/SELECT INTO

2003-01-22 Thread Tom Lane
"Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> Why don't you just include them by default, otherwise if WITHOUT OIDS
> appears in the CREATE TABLE command, then don't include them ?

Well, adding a WITHOUT OIDS option to CREATE TABLE AS would be a new
feature, which I don't have the time/interest for.  I won't do anything
to preclude someone else implementing that, though ;-)

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] 7.4 Wishlist

2003-01-22 Thread Rod Taylor
On Wed, 2003-01-22 at 04:55, Antti Haapala wrote:
> SHOW
> 
> 
> I think 7.4 could and really should implement a SHOW command similar to
> MySQL's. Listing tables/foreign keys/views and so on isn't just a psql problem,


Actually, in 7.4 I'd tell them to:

select * from information_schema.tables;


This is a far more portable method.
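
For example (assuming the information schema that ships in 7.4 follows the
SQL-standard layout), listing a database's own tables and views is just:

  SELECT table_schema, table_name, table_type
    FROM information_schema.tables
   WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
   ORDER BY table_schema, table_name;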

-- 
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc





Re: [HACKERS] [mail] Re: Win32 port patches submitted

2003-01-22 Thread Igor Georgiev
http://www.janwieck.net/win32_port/notes.win32-ports.txt
How to compile this Win32 port
1) Requirements and 1-time settings:
1.1) Visual C++
You need VC++ 6.0 on ServicePack 5.

Oooh no, not MS stuff please :(
Dev-Cpp is a cool open source IDE that uses a MinGW port of gcc.
http://www.bloodshed.net/

- Original Message - 
From: "Jan Wieck" <[EMAIL PROTECTED]>
To: "Emmanuel Charpentier" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, January 22, 2003 1:03 AM
Subject: Re: [HACKERS] [mail] Re: Win32 port patches submitted
> Emmanuel Charpentier wrote:
> > 
> > MinGW and MinGW-ported tools? That's a nice, small and cozy unix-like
> > environment on top of Windows. Add emacs to it, and Windows becomes almost
> > tolerable ...
> 
> How good is the debugging support under MinGW? Is it at least comparable
> to using gdb under unix? If not, you might find yourself all of a
> sudden out in the cold ...
> Jan



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] 7.4 Wishlist

2003-01-22 Thread Antti Haapala

Oops... I sent an early draft of my post by accident, sorry...

-- 
Antti Haapala
+358 50 369 3535
ICQ: #177673735


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] 7.4 Wishlist

2003-01-22 Thread Antti Haapala

SHOW


I think 7.4 could and really should implement a SHOW command similar to
MySQL's. Listing tables/foreign keys/views and so on isn't just a psql problem,
but common to many interface implementations and maintenance tools. I
think it's wrong to rely on the pg_* system tables for these.

If you think of some prospective new user (coming from MySQL) who asks how
to query the list of tables in PostgreSQL and gets an answer (from psql -E)
like this:

SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index'
       WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
  u.usename as "Owner"
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_user u ON u.usesysid = c.relowner
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','S','')
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;

opposed to something like

SHOW TABLES_AND_VIEWS

as in MySQL, which DBMS would this user prefer?

I further suggest that these SHOW command parameters (like tables and
views) could be views in a special system schema, so it would be easy to
update them just by changing templates. Maybe 7.5 (if it introduced
new features) could provide downward compatibility for 7.4-ready clients by
allowing the use of SHOW views from a different schema (like
pg_show_743_compat); a rough sketch of the idea is below.

:)
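
To make that concrete, here is a rough, hypothetical sketch of what one such
view might look like (the schema and view names are made up; a real
implementation would live in a system schema, but a user-level experiment
cannot use a name starting with pg_):

  CREATE SCHEMA show_compat;

  CREATE VIEW show_compat.tables_and_views AS
    SELECT n.nspname AS "Schema", c.relname AS "Name",
           CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' END AS "Type"
      FROM pg_catalog.pg_class c
      JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind IN ('r', 'v')
       AND n.nspname NOT IN ('pg_catalog', 'pg_toast');

  -- a hypothetical SHOW TABLES_AND_VIEWS would then just be sugar for
  SELECT * FROM show_compat.tables_and_views ORDER BY 1, 2;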


Stored procedures used in implementing new syntax
-

This is an implementation detail suggestion. Would it be possible for
new SQL syntax to be implemented in languages other than C?

We








---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])