Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Martijn van Oosterhout
On Mon, May 21, 2007 at 11:58:35PM -0400, Tom Lane wrote:
> The intent of the FP binary I/O code we have is that for platforms
> supporting IEEE-compliant FP formats, the on-the-wire representation
> should be uniformly big-endian, same as is true for integers.  So
> I would concur with a patch that ensures that this is what happens
> on the different ARM variants ... though I'll still be interested
> to see how you make that happen given the rather poor visibility
> into which model and endianness we are running on.

Well, I have an idea how you might do this: figure out the ordering of
the float at runtime. You can easily construct a float with any given
bit pattern. You can then examine the bytes to determine the order and
build a mapping table to reorder them. The program below creates a
float with the bit pattern 01020304. You can then examine the bits of
the float to determine the rearranging needed. You could do the same
for 64-bit floats.

This is obviously only needed for systems where the order can't be
determined at compile time.

ldexp is in SVr4, 4.3BSD and C89.

#include <stdio.h>
#include <math.h>

int main(void)
{
    float f = ldexp(1.0,-119) + ldexp(1.0,-125) + ldexp(1.0,-126) +
        ldexp(1.0,-133) + ldexp(1.0,-142);
    unsigned char *a = (unsigned char *) &f;

    /* print the bytes in memory order to reveal the platform's layout */
    printf("Float: %g, bytes: %02X%02X%02X%02X\n", f, a[0], a[1], a[2], a[3]);
    return 0;
}
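
For 64-bit doubles, a sketch along the same lines (assuming, as above,
that the native format is IEEE; the exponents are chosen so the binary64
bit pattern comes out as 0102030405060708, which means each byte's value
minus one is the big-endian wire slot it belongs in -- the probe doubles
as the reorder table):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* sum of powers of two whose IEEE-754 binary64 encoding is
     * 0x0102030405060708 */
    double d = ldexp(1.0,-1007) + ldexp(1.0,-1010) + ldexp(1.0,-1018) +
        ldexp(1.0,-1019) + ldexp(1.0,-1025) + ldexp(1.0,-1033) +
        ldexp(1.0,-1035) + ldexp(1.0,-1041) + ldexp(1.0,-1042) +
        ldexp(1.0,-1049) + ldexp(1.0,-1050) + ldexp(1.0,-1051) +
        ldexp(1.0,-1056);
    unsigned char *a = (unsigned char *) &d;
    int i;

    printf("Double bytes in memory order:");
    for (i = 0; i < 8; i++)
        printf(" %02X", a[i]);      /* a[i] - 1 = wire position of byte i */
    printf("\n");
    return 0;
}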

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [PATCHES] [HACKERS] Full page writes improvement, code update

2007-05-21 Thread Koichi Suzuki

I really appreciate the modification.

I also believe XLOG_NOOP is a good way to keep the XLOG format consistent.
I'll continue writing the code to produce incremental log records from
the full page writes, as well as to maintain the CRC, XLOG_NOOP and
other XLOG locations. I also found that you've added information on
btree split log records, which enables producing the corresponding
incremental logs from the full page writes.


2007/5/21, Tom Lane <[EMAIL PROTECTED]>:

Koichi Suzuki <[EMAIL PROTECTED]> writes:
> As replied to "Patch queue triage" by Tom, here's a simplified patch to
> mark WAL records as "compressable", with no increase in WAL itself.
> Compression/decompression commands will be posted separately to PG
> Foundry for further review.

Applied with some minor modifications.  I didn't like the idea of
suppressing the sanity-check on WAL record length; I think that's
fairly important.  Instead, I added a provision for an XLOG_NOOP
WAL record type that can be used to fill in the extra space.
The way I envision that working is that the compressor removes
backup blocks and converts each compressible WAL record to have the
same contents and length it would've had if written without backup
blocks.  Then, it inserts an XLOG_NOOP record with length set to
indicate the amount of extra space that needs to be chewed up --
but in the compressed version of the WAL file, XLOG_NOOP's "data
area" is not actually stored.  The decompressor need only scan
the file looking for XLOG_NOOP and insert the requisite number of
zero bytes (and maybe recompute the XLOG_NOOP's CRC, depending on
whether you want it to be valid for the short-format record in the
compressed file).  There will also be some games to be played for
WAL page boundaries, but you had to do that anyway.
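
To illustrate the shape of that loop, a schematic sketch only -- the
record header below is a made-up simplification, not the real
XLogRecord layout:

#include <string.h>

#define XLOG_NOOP 0x20          /* hypothetical record-type code */

typedef struct MiniRecord       /* simplified stand-in for a WAL record header */
{
    unsigned int  len;          /* length of the record's data area */
    unsigned char info;         /* record type */
} MiniRecord;

/* Expand one record from the compressed file (*src) into the
 * reconstructed WAL (*dst), advancing both cursors. */
static void
expand_one(const unsigned char **src, unsigned char **dst)
{
    MiniRecord  rec;

    memcpy(&rec, *src, sizeof(rec));
    memcpy(*dst, &rec, sizeof(rec));
    *src += sizeof(rec);
    *dst += sizeof(rec);
    if (rec.info == XLOG_NOOP)
        memset(*dst, 0, rec.len);   /* data area was not stored: refill zeros */
    else
    {
        memcpy(*dst, *src, rec.len);
        *src += rec.len;
    }
    *dst += rec.len;
    /* a real version would recompute the CRC here and handle records
     * crossing WAL page boundaries */
}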

regards, tom lane





--
Koichi Suzuki



Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Tom Lane
Shachar Shemesh <[EMAIL PROTECTED]> writes:
> As for the ARM architecture, I've pulled my sources, and the answer is
> this: ARM doesn't have one standard floating point format. Different ARM
> architectures will use different formats. Most architectures will
> actually use IEEE, but some will use decimal based and such.

Okay, I spent some time googling this question, and I can't find any
suggestion that any ARM variant uses non-IEEE-compliant float format.
What *is* real clear is that depending on ARM model and a run time (!)
CPU endianness flag, there are three or four different possibilities
for the endianness of the data, including a PDP-endian-like alternative
in which the order of the high and low words is at variance with the
order of bytes within the words.  (Pardon me while I go vomit...)

The intent of the FP binary I/O code we have is that for platforms
supporting IEEE-compliant FP formats, the on-the-wire representation
should be uniformly big-endian, same as is true for integers.  So
I would concur with a patch that ensures that this is what happens
on the different ARM variants ... though I'll still be interested
to see how you make that happen given the rather poor visibility
into which model and endianness we are running on.
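
Schematically, the send-side fix could then be a table-driven shuffle,
once a runtime probe (like the one Martijn posted elsewhere in this
thread) has established where each memory byte belongs on the wire.
A sketch, not existing code:

/* wire_pos[i] = big-endian wire slot of memory byte i, as discovered
 * by the runtime probe */
static void
float4_to_wire(const unsigned char mem[4], const int wire_pos[4],
               unsigned char wire[4])
{
    int i;

    for (i = 0; i < 4; i++)
        wire[wire_pos[i]] = mem[i];
}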

PS: Of course this does not resolve the generic issue of what to do
with platforms that have outright non-IEEE-format floats.  But at the
moment I don't see evidence that we need reach that issue for ARM.

PPS: I'm sort of wondering if the PDP-endian business doesn't afflict
int8 too on this platform.

regards, tom lane



[HACKERS] initializing the database cluster

2007-05-21 Thread Islam Hegazy
Hi all

I installed the source of PostgreSQL 8.2.3 on a Linux machine. The installation 
process ended successfully, but when I try to create the database cluster using 
the initdb command I get the following error:
"
creating conversions ... sh: line 1:  1838 Segmentation fault  (core 
dumped) "/home/grads/imehegaz/MyPostgreSQL/bin/postgres" --single -F -O -c 
search_path=pg_catalog -c exit_on_error=true template1 >/dev/null
child process exited with exit code 139
"
I have installed the same source code several times before, but this is the first 
time I've gotten this error. Any idea what the problem could be?

Regards
Islam Hegazy

Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread John DeSoi

Hi Usama,

On May 21, 2007, at 9:20 AM, Usama Munir wrote:

I wanted to submit a patch for this, IFF the community wants this  
function. The rationale is obviously to help application developers  
writing applications like pgAdmin. Currently this part of the SQL needs  
to be constructed manually for Postgres by the tools.


I would like to have this functionality (yesterday :) -- I'm  
currently working on this for pgEdit. Let me know if you need help  
with review or testing.





John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL




Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Tom Lane
Shachar Shemesh <[EMAIL PROTECTED]> writes:
> As for the ARM architecture, I've pulled my sources, and the answer is
> this: ARM doesn't have one standard floating point format. Different ARM
> architectures will use different formats.

So how will you know which one is in use, which I'd think you'd need to
know to translate it?

> Most architectures will
> actually use IEEE, but some will use decimal based and such. According
> to my source (a distributor of ARM based hardware), none of the other
> formats will lose precision if translated to IEEE.

Your source appears fairly ignorant of things-float.  If they really are
using decimal FP, it's easy to demonstrate that a lossless conversion
to/from binary representation of similar size is impossible.  The set of
exactly representable values is simply different.

I have no objection to standardizing on IEEE-on-the-wire if you can
prove that's a superset of everything else.  Unfortunately, you can't,
because it isn't.  As soon as you get into lossy conversions, you might
as well use text and avoid the issue.
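
The representability point is easy to demonstrate even with ordinary
binary doubles; a minimal illustration:

#include <stdio.h>

int main(void)
{
    /* the nearest binary64 value to decimal 0.1 is not 0.1: printing
     * to 17 significant digits exposes the difference */
    printf("%.17g\n", 0.1);     /* prints 0.10000000000000001 */
    return 0;
}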

regards, tom lane



Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Kevin Grittner
>>> On Mon, May 21, 2007 at  9:02 AM, in message <[EMAIL PROTECTED]>,
Shachar Shemesh <[EMAIL PROTECTED]> wrote: 
> 
>> We have fought many years to get closer to IEEE 754 
>> conformance.
> 
> Please notice that the format I offered *is* IEEE. In fact, what I'm
> offering is to export the binary in IEEE format EVEN IF THE NATIVE
> FORMAT ISN'T.
> 
> As for the ARM architecture, I've pulled my sources, and the answer is
> this: ARM doesn't have one standard floating point format. Different ARM
> architectures will use different formats. Most architectures will
> actually use IEEE, but some will use decimal based and such. According
> to my source (a distributor of ARM based hardware), none of the other
> formats will lose precision if translated to IEEE.
> 
> So, I will repeat my original question. I can write portable code that
> will translate the native format to IEEE (if it's not already the same).
> It seems that it will be good enough for all platforms discussed here.
> Failing that, we can adopt my later proposal which is IEEE + status for
> all places where that is good enough.
 
It sounds to me like there are two issues:
 
(1)  How do you get things to work under the current communications protocol?
 
(2)  Should PostgreSQL consider moving toward a platform-independent binary 
protocol in some future release?
 
Based on what Tom and others have said, you need to resort to text 
representation for portability with the current protocol.  You might be 
surprised at how minimal the impact is, especially if the CPUs aren't saturated.
 
Clearly a platform-independent protocol is possible.  I send binary information 
between machines with different hardware and operating systems all the time.  A 
big question for any PostgreSQL implementation of this has been whether any of 
the internal representations used on supported platforms are incapable of IEEE 
representation without data loss.  You've asserted that you've done research 
which shows compatibility.  Can anyone show a counter-example, where IEEE 
representation on the wire would not work?
 
For reference on how Java has addressed this issue for floating point numbers 
and how they go over the wire, see:
 
http://java.sun.com/docs/books/jvms/second_edition/html/Concepts.doc.html#33377
 
http://java.sun.com/docs/books/jvms/second_edition/html/Overview.doc.html#28147
 
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Double.html#doubleToLongBits(double)
 
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Double.html#doubleToRawLongBits(double)
 
http://java.sun.com/j2se/1.5.0/docs/api/java/io/DataOutput.html#writeLong(long)
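
For comparison, a minimal C analogue of the doubleToLongBits() idea -- a
sketch that assumes the host's double is already IEEE binary64 and that
float and integer byte order agree, which is exactly the assumption at
issue on ARM:

#include <stdint.h>
#include <string.h>

/* Serialize a double big-endian by treating its bit pattern as an
 * integer, reducing the float byte-order problem to the integer one. */
static void
put_double_be(double d, unsigned char out[8])
{
    uint64_t bits;
    int      i;

    memcpy(&bits, &d, sizeof(bits));    /* like doubleToRawLongBits() */
    for (i = 0; i < 8; i++)
        out[i] = (unsigned char) (bits >> (56 - 8 * i));
}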
 
-Kevin
 




Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Tom Lane
"Karl O. Pinc" <[EMAIL PROTECTED]> writes:
> On 05/19/2007 12:41:47 PM, Tom Lane wrote:
>> There's been previous discussion of allowing BEFORE INSERT triggers
>> on views, so long as the triggers always return NULL to suppress
>> the actual insertion attempt (ie, we'd move the "can't insert into
>> view" test out of the rewriter and put it downstream of trigger firing
>> in the executor).  So far no one's figured out how to make that idea
>> work for UPDATE/DELETE, but maybe you could argue that even if it
>> only worked for INSERT it'd be a useful feature.  It'd certainly solve
>> the problem for COPY.

> Disclaimer: At least some of that discussion was my fault.

> I'd be happy to go in that direction, but arguing for
> having BEFORE INSERT triggers work for
> INSERT and not on UPDATE and DELETE seems tough.  It's a
> little like driving blind in one eye, you can see _partly_
> where you're going but not all the way to the end of the road.
> I'd be afraid it'd be a bad design decision that would
> artificially constrain later work.

Well, as I noted earlier, making triggers work for views seems like
it'd be more useful than adding more dependencies on the current
rule system.

> The problem with UPDATE and DELETE on BEFORE
> triggers is coming up with an OLD row.

No, that's the easy part, since as you note you can just compute the
view and extract the rows meeting the WHERE condition.  And in the
UPDATE case you need to compute a NEW row with the updated values
substituted, but that seems just a small matter of programming as well.
The issues I see are:

* Existing triggers expect to see a ctid identifier for each updated
or deleted row.  Is it OK to just say that you don't get that in a
trigger for a view?

* What about INSERT/UPDATE/DELETE RETURNING?  The current definition of
triggers gives them no way to specify what is computed for RETURNING.

> The big problem that came up revolves around what
> to do if the view does not contain any/enough
> primary keys.

I don't think that's a problem we need to solve.  If a user wants to
update his views it's up to him to define them in a way that gives
the trigger enough information to do its job.  This is the SQL spec's
approach (no pkey -> view not updatable), and I don't think we need
to invent weird new concepts to do better.

regards, tom lane



Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Tom Lane
"Usama Munir" <[EMAIL PROTECTED]> writes:
> I guess the way forward for me would be to crawl back into my corner, 
> write up a mini-spec of how I intend to implement it and get back to 
> you guys.

Well, the *first* thing to do is read pg_dump for awhile.  Until you've
grokked what it does to support multiple backend versions and how it
handles inter-object dependencies (dump ordering problems), you won't be
able to write a spec that has any credibility.  In particular, the
business about breaking circular dependency loops is something I have
no idea how to handle in a simple "dump library" API.

regards, tom lane



[HACKERS] best_inner_indexscan vs. reality

2007-05-21 Thread Tom Lane
I looked into the curious planner behavior described in this thread:
http://archives.postgresql.org/pgsql-performance/2007-05/msg00388.php
and after a bit of experimentation was able to duplicate it in the
regression database:

regression=# explain select * from int4_tbl a where f1 in (select hundred from 
tenk1 b);
                          QUERY PLAN
--------------------------------------------------------------
 Nested Loop IN Join  (cost=0.00..30.20 rows=5 width=4)
   Join Filter: (a.f1 = b.hundred)
   ->  Seq Scan on int4_tbl a  (cost=0.00..1.05 rows=5 width=4)
   ->  Seq Scan on tenk1 b  (cost=0.00..458.00 rows=1 width=4)
(4 rows)

regression=# set enable_bitmapscan TO 0;
SET
regression=# explain select * from int4_tbl a where f1 in (select hundred from 
tenk1 b);
                                  QUERY PLAN
------------------------------------------------------------------------------
 Nested Loop IN Join  (cost=0.00..13.21 rows=5 width=4)
   ->  Seq Scan on int4_tbl a  (cost=0.00..1.05 rows=5 width=4)
   ->  Index Scan using tenk1_hundred on tenk1 b  (cost=0.00..242.00 rows=100 width=4)
         Index Cond: (b.hundred = a.f1)
(4 rows)

WTF?  How can disabling an unrelated plan type cause the thing to find a
cheaper plan than it found otherwise?

After some digging, the problem can be laid at the feet of
best_inner_indexscan, whose comment indeed foresees this issue:

 * best_inner_indexscan
 *    Finds the best available inner indexscan for a nestloop join
 *    with the given rel on the inside and the given outer_rel outside.
 *    May return NULL if there are no possible inner indexscans.
 *
 * We ignore ordering considerations (since a nestloop's inner scan's order
 * is uninteresting).  Also, we consider only total cost when deciding which
 * of two possible paths is better --- this assumes that all indexpaths have
 * negligible startup cost.  (True today, but someday we might have to think
 * harder.)  Therefore, there is only one dimension of comparison and so it's
 * sufficient to return a single "best" path.

The "best" inner indexscan for this query is a bitmap scan with startup
cost of 5 and total cost of 170 (beating the plain indexscan's total
cost of 242).  However, for this IN join that's estimated as a worse
choice than the seqscan, precisely because of its startup cost.
We estimate a nestloop IN-join on the assumption that we'll stop
scanning after fetching one matching inner tuple; therefore, if there
are lots of potentially matching inner tuples, both a seqscan and an
indexscan look pretty cheap (less than 5 cost units to find the first
match), while the bitmap scan loses because of its startup cost.
Disabling bitmap scans allows the regular indexscan to be seen as
cheapest by best_inner_indexscan, and so it survives to be chosen as the
join partner.

Clearly, best_inner_indexscan's API is obsolete now that bitmap scans
exist.  What I'm inclined to do is make it cache and return both the
cheapest-total and cheapest-startup plans for any given combination of
rels.  (This should add only negligibly to its cost, since it's
enumerating all the possible paths anyway.)  To avoid useless effort
in match_unsorted_outer(), it should probably consider the
cheapest-startup only when doing JOIN_IN cases --- AFAICS there isn't
any other case where startup cost can outweigh total cost for the inside
of a nestloop.

I'm not sure if it's worth considering this issue for
best_appendrel_indexscan() --- in an Append situation it's not clear
that startup costs of individual subplans mean much of anything.

Comments?

Also, should this be backpatched?  Both 8.1 and 8.2 have got the issue;
prior releases no, since they didn't have bitmap scans.

regards, tom lane



Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Karl O. Pinc


On 05/21/2007 01:59:36 PM, Tom Lane wrote:


I think that the wave of the future is probably to figure out a way to
provide trigger support for views.


I put forward a possible way to do trigger support
for views in a previous email and would appreciate comment.
(http://archives.postgresql.org/pgsql-hackers/2007-05/msg00906.php)

Really, I'd like to know if I should just give up and move
on or if I can do something to help advance Postgres towards
something that will solve my copy-into-a-view problems.


Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Andrew Dunstan



Usama Munir wrote:
I think using pg_dump is a good option in some cases, but not all the 
time; having a function makes it much cleaner to use.


That's why having a shared pgdump library as has been previously 
mentioned is by far the best solution.


We have discussed this before, and factoring out this functionality into 
a shared lib is what needs to be done. I'm not convinced it is as much 
work as Tom suggests, but it is certainly a non-trivial task.


cheers

andrew



Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Tom Lane
"Karl O. Pinc" <[EMAIL PROTECTED]> writes:
> When I say I write and execute an INSERT statement I mean
> that the INSERT statement into the view is executed just as if the user
> wrote it -- it is passed through the rule system and turns into
> whatever INSERT or other statements the user has
> associated with INSERTing into the view.

The problem with this line of argument is that it starts from the premise
that rule support for INSERTs is fine and dandy, and all we lack is that
COPY isn't paying attention to it.  This premise is not in accord with
reality --- reality is that the rule system sucks for a number of
reasons, the main one being multiple-evaluation risks.  I can't count
how many times I've told newbies to forget trying to use a rule and
instead use a trigger for whatever they were trying to accomplish.
Take a look also at the so-far-failed attempts to implement SQL-spec
updatable views on the basis of the current rule system.

I think that the wave of the future is probably to figure out a way to
provide trigger support for views.  Or maybe we must throw out the
current rule system and start over.  Neither plan makes it sound
attractive to make COPY depend on the current rule system.

regards, tom lane



Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Andrew Dunstan



Usama Munir wrote:
Got it. Thanks for the clarification; I suppose the way you described 
it, it needs to go over libpq for the database interface, not the 
HeapTuple / Form_pg_* way.





Yes, for many reasons including those mentioned in Tom's email on this 
subject today.


cheers

andrew



Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Usama Munir
Got it. Thanks for the clarification; I suppose the way you described 
it, it needs to go over libpq for the database interface, not the 
HeapTuple / Form_pg_* way.


I guess the way forward for me would be to crawl back into my corner, 
write up a mini-spec of how I intend to implement it and get back to 
you guys.


Thanks for your feedback.

Regards,
Usama Munir.



Andrew Dunstan wrote:


I mean as a shared library - a .so for Unix (or whatever the flavor of 
unix uses instead) or a DLL on Windows. And no, it would not be in 
contrib - as I mentioned in another thread yesterday I want to propose 
that contrib disappear.


Certainly pg_dump would use the library, and retain all the file 
handling processing it does now. But we could also link it into psql, 
for example, and expose the results via \ commands.


If you want to have a go at that you'll probably make lots of people 
very happy.


cheers

andrew


Usama Munir wrote:
When you say pgdump library, do you mean taking all the catalog-querying 
functionality into a contrib-like module, exposed as functions, and 
then having a simple pgdump executable which calls those functions to 
dump to a file? Because you would still need a pgdump executable, I 
suppose, for people to be able to back up their stuff. Is my 
understanding somewhere near the actual idea, or am I way off here?


Are there any discussions on this topic which could give me a little 
more idea? Because I would definitely like to take a shot at this.


Regards,
Usama Munir
EnterpriseDB (www.enterprisedb.com)


Andrew Dunstan wrote:



Usama Munir wrote:
I think using pg_dump is a good option in some cases, but not all 
the time; having a function makes it much cleaner to use.


That's why having a shared pgdump library as has been previously 
mentioned is by far the best solution.


We have discussed this before, and factoring out this functionality 
into a shared lib is what needs to be done. I'm not convinced it is 
as much work as Tom suggests, but it is certainly a non-trivial task.


cheers

andrew






Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Andrew Dunstan


I mean as a shared library - a .so for Unix (or whatever the flavor of 
unix uses instead) or a DLL on Windows. And no, it would not be in 
contrib - as I mentioned in another thread yesterday I want to propose 
that contrib disappear.


Certainly pg_dump would use the library, and retain all the file 
handling processing it does now. But we could also link it into psql, 
for example, and expose the results via \ commands.


If you want to have a go at that you'll probably make lots of people 
very happy.


cheers

andrew


Usama Munir wrote:
When you say pgdump library, do you mean taking all the catalog-querying 
functionality into a contrib-like module, exposed as functions, and 
then having a simple pgdump executable which calls those functions to 
dump to a file? Because you would still need a pgdump executable, I 
suppose, for people to be able to back up their stuff. Is my 
understanding somewhere near the actual idea, or am I way off here?


Are there any discussions on this topic which could give me a little 
more idea? Because I would definitely like to take a shot at this.


Regards,
Usama Munir
EnterpriseDB (www.enterprisedb.com)


Andrew Dunstan wrote:



Usama Munir wrote:
I think using pg_dump is a good option in some cases, but not all 
the time; having a function makes it much cleaner to use.


That's why having a shared pgdump library as has been previously 
mentioned is by far the best solution.


We have discussed this before, and factoring out this functionality 
into a shared lib is what needs to be done. I'm not convinced it is 
as much work as Tom suggests, but it is certainly a non-trivial task.


cheers

andrew






Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Usama Munir
When you say pgdump library, do you mean taking all the catalog-querying 
functionality into a contrib-like module, exposed as functions, and 
then having a simple pgdump executable which calls those functions to dump 
to a file? Because you would still need a pgdump executable, I suppose, 
for people to be able to back up their stuff. Is my understanding 
somewhere near the actual idea, or am I way off here?


Are there any discussions on this topic which could give me a little 
more idea? Because I would definitely like to take a shot at this.


Regards,
Usama Munir
EnterpriseDB (www.enterprisedb.com)


Andrew Dunstan wrote:



Usama Munir wrote:
I think using pg_dump is a good option in some cases, but not all 
the time; having a function makes it much cleaner to use.


That's why having a shared pgdump library as has been previously 
mentioned is by far the best solution.


We have discussed this before, and factoring out this functionality 
into a shared lib is what needs to be done. I'm not convinced it is as 
much work as Tom suggests, but it is certainly a non-trivial task.


cheers

andrew




Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Karl O. Pinc


On 05/21/2007 12:17:38 PM, Jim C. Nasby wrote:

On Mon, May 21, 2007 at 05:02:29PM +, Karl O. Pinc wrote:
>
> On 05/21/2007 11:23:57 AM, Jim C. Nasby wrote:
>
> >What about adding COPY support to rules? ISTM if you want to copy into
> >a view you probably want to insert into it as well, so why not use the
> >same mechanism? Presumably a COPY rule would also be faster than a
> >trigger.
>
> I'd say there's no difference between the rule you'd use
> for COPYing and the rule you'd use for INSERTing,
> which is why my patch produces an
> INSERT statement and then proceeds to (attempt
> to) execute the statement for every row of data
> to be copied.  If you don't have a rule that allows
> INSERT into the view you get (the already existing)
> error with a hint that tells you to make an INSERT
> rule.

As Tom mentioned, that's very non-transparent to users.

...

I don't think we understand each other.  (I sure don't
understand the non-transparency comment above.  I thought
Tom said that in regard to having to write a special/new
COPY syntax in order to insert into a view, rather than
just using the name of a view instead of the name of a table.)

When I say I write and execute an INSERT statement I mean
that the INSERT statement into the view is executed just as if the user
wrote it -- it is passed through the rule system and turns into
whatever INSERT or other statements the user has
associated with INSERTing into the view.  The INSERT
statement must only be passed through the rule system once; the
resulting prepared statement is executed for every line
of COPY input.  This is exactly what you propose when
you say COPY should go through the rule system, except
that I'm using the INSERT rule to do the COPY rather
than have a separate COPY rule.  Or maybe not, see below.

 You're also
assuming that converting a COPY to a string of INSERTs (which would then
get pushed through the rule system one-by-one) would be as efficient as
just copying into a table. I don't believe that's the case.


I'm sure it's not.  But (IMO) anybody who's really concerned about
efficiency shouldn't be using a view anyhow.  It's easy
enough to run the raw data through an awk script or something
and COPY into the underlying tables.  Views are for making things
easier to do.  It's not necessary for them to be as fast as possible
in all cases.  Again, in my opinion.


I haven't studied the rule code, but at least for the simple case of
redirecting a copy into a view to a single table (ie: a single statement
INSTEAD rule that has no where clause) the copy command should be able
to be changed by the rule so that it's just inserting into a different
table. The performance should then be the same as if you copied directly
into that table in the first place.


The problem comes when the user writes rules that insert into
multiple tables, or do other random things.  At that point
you just want to "do what the user asked" when inserting
each row of data into the view because you really don't
know what the rules are going to expand into.



This doesn't mean that a row-by-row capability (or the ability to have
COPY generate insert statements) would be bad, but they are not the same
as a simple rewrite of a COPY command (ie: adding COPY support to rules).

Are you talking about having a COPY statement rewrite into a bunch
of COPY statements, and then applying the input data to each copy
statement in turn?  That sounds feasible.  There would be a certain
disconnect between such a COPY rule and the rest of the rule
system.  All the other sorts of rules can expand into any kind
of statement.  You could, in theory, have an INSERT rule
that deletes a row from table B for every row inserted into
table A.  (Probably in addition to inserting into table A.)
COPY rules couldn't work that way.  At least not without
all sorts of implementation complications.  Maybe it doesn't
matter; you could argue that INSERT rules shouldn't do anything
but INSERT, ever.  But that's not enforced now.

It'd also be a little weird to have an INSERT rule that behaves
differently from a COPY rule, updating different tables or whatever.


Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Jim C. Nasby
On Mon, May 21, 2007 at 05:02:29PM +, Karl O. Pinc wrote:
> 
> On 05/21/2007 11:23:57 AM, Jim C. Nasby wrote:
> 
> >What about adding COPY support to rules? ISTM if you want to copy into
> >a view you probably want to insert into it as well, so why not use the
> >same mechanism? Presumably a COPY rule would also be faster than a
> >trigger.
> 
> I'd say there's no difference between the rule you'd use
> for COPYing and the rule you'd use for INSERTing,
> which is why my patch produces an
> INSERT statement and then proceeds to (attempt
> to) execute the statement for every row of data
> to be copied.  If you don't have a rule that allows
> INSERT into the view you get (the already existing)
> error with a hint that tells you to make an INSERT
> rule.

As Tom mentioned, that's very non-transparent to users. You're also
assuming that converting a COPY to a string of INSERTS (which would then
get pushed through the rule system one-by-one) would be as efficient as
just copying into a table. I don't believe that's the case.

I haven't studied the rule code, but at least for the simple case of
redirecting a copy into a view to a single table (ie: a single statement
INSTEAD rule that has no where clause) the copy command should be able
to be changed by the rule so that it's just inserting into a different
table. The performance should then be the same as if you copied directly
into that table in the first place.

This doesn't mean that a row-by-row capability (or the ability to have
COPY generate insert statements) would be bad, but they are not the same
as a simple rewrite of a COPY command (ie: adding COPY support to rules).
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Karl O. Pinc


On 05/21/2007 11:23:57 AM, Jim C. Nasby wrote:


What about adding COPY support to rules? ISTM if you want to copy into
a view you probably want to insert into it as well, so why not use the
same mechanism? Presumably a COPY rule would also be faster than a
trigger.


I'd say there's no difference between the rule you'd use
for COPYing and the rule you'd use for INSERTing,
which is why my patch produces an
INSERT statement and then proceeds to (attempt
to) execute the statement for every row of data
to be copied.  If you don't have a rule that allows
INSERT into the view you get (the already existing)
error with a hint that tells you to make an INSERT
rule.

Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Jim C. Nasby
On Sat, May 19, 2007 at 01:41:47PM -0400, Tom Lane wrote:
> > I _could_ make tables that "correspond"
> > to the views and put BEFORE INSERT triggers on them and
> > have the triggers insert into the views (or the equalivent),
> > but then the users would have to use the views for most
> > things and the "corresponding tables" when doing a COPY
> > or using the application's data import function.
> 
> There's been previous discussion of allowing BEFORE INSERT triggers
> on views, so long as the triggers always return NULL to suppress
> the actual insertion attempt (ie, we'd move the "can't insert into
> view" test out of the rewriter and put it downstream of trigger firing
> in the executor).  So far no one's figured out how to make that idea
> work for UPDATE/DELETE, but maybe you could argue that even if it
> only worked for INSERT it'd be a useful feature.  It'd certainly solve
> the problem for COPY.

What about adding COPY support to rules? ISTM if you want to copy into a
view you probably want to insert into it as well, so why not use the
same mechanism? Presumably a COPY rule would also be faster than a
trigger.
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Usama Munir




I think using pg_dump is a good option in some cases, but not all the
time; having a function makes it much cleaner to use.

Consider pgAdmin, let's say (and there are many such applications out
there): you need to show the object DDL on the RHP, nicely formatted,
so you can copy-paste it and then perhaps export it to an SQL file.

Now imagine you need to spawn a new process from inside the app for
pg_dump, and then make it write to a file and then read the file to
display the object DDL, which is possible but very messy-looking code.
Then there are issues with launching external processes on certain
platforms (for example, in Java, if you start a new process from the
runtime, you need to make sure you properly drain its stdout and
stderr streams, otherwise it can deadlock, etc.); I would use a
function, if available, any day.

Additionally, there are such functions for other objects, but for
tables you need to construct the DDL manually, so I also thought this
would just complete the set and make it easier to write an SQL / DDL
exporting app.
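
(A minimal sketch of that spawn-and-read dance in C, with placeholder
database and table names; the deadlock risk comes from not draining the
child's output, so the loop reads continuously:)

#include <stdio.h>

int main(void)
{
    char  line[1024];
    /* pg_dump writes the table DDL to stdout; keep reading so the
     * pipe never fills up */
    FILE *p = popen("pg_dump --schema-only --table=dept mydb", "r");

    if (p == NULL)
        return 1;
    while (fgets(line, sizeof(line), p) != NULL)
        fputs(line, stdout);
    return pclose(p);
}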



Naz Gassiep wrote:

  Just a question, is there any advantage to having this over building a
function in applications that wrap and use pg_dump with a few options?
Surely that's a more appropriate way to achieve this functionality?
- Naz.


Usama Munir wrote:
  
  
Hi,

I was following a thread some time ago where adding a function
*pg_get_tabledef* was one of the TODOs for 8.2, but it somehow didn't
make it into the release, perhaps because the functionality was not
clearly defined? Not sure.

Anyway, I happened to come up with a function for pg_get_tabledef which
works something like the following:

postgres=# select pg_get_tabledef(16388) ;
                pg_get_tabledef
--------------------------------------------
 CREATE TABLE  public.dept
 (
  deptno numeric(2,0) NOT NULL,
  dname  character varying(14),
  loc    character varying(13)
 )
 WITHOUT OIDS;
(1 row)



I wanted to submit a patch for this, IFF the community wants this
function. The rationale is obviously to help application developers
writing applications like pgAdmin. Currently this part of the SQL needs
to be constructed manually for Postgres by the tools.

It is arguable that a table definition will have constraints, triggers
etc. as well, and they can be added without much problem, but I think
if a tool needs to construct the SQL for all table-related objects then
functions are already available for them, like pg_get_constraintdef,
pg_get_ruledef, pg_get_indexdef, pg_get_triggerdef etc.

I understand that you guys don't like someone to develop a patch in
isolation and just come up with it one day, but it really came out as
a by-product of some other work, and I thought you guys might be
interested in it.

If it is desired, I will submit a patch for it within a day or so.

Regards,
Usama Munir
EnterpriseDB (www.enterprisedb.com)




  
  





Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Tom Lane
Naz Gassiep <[EMAIL PROTECTED]> writes:
> Just a question, is there any advantage to having this over building a
> function in applications that wrap and use pg_dump with a few options?
> Surely that's a more appropriate way to achieve this functionality?

Refactoring pg_dump into some sort of library would clearly be a better
solution.  Unfortunately it's also a huge amount of work :-(

There are several reasons why trying to push pg_dump's functionality
into the backend is largely doomed to failure:

* pg_dump needs to be able to dump from older server versions, and
having two completely different code paths for servers before and after
version X would be a mess.

* pg_dump can't consider a table as a monolithic object anyway; problems
like breaking circular dependencies involving DEFAULT expressions
require getting down-and-dirty with the constituent elements.  If there
were a monolithic pg_get_table_def function, pg_dump couldn't use it.

* pg_dump ought to be dumping a snapshot of the DB as of its transaction
start time.  Most of the backend's catalog access works on SnapshotNow
and hence fails this test.  (I fear that we already have some issues
from the get_xxx_def functions that pg_dump uses now.)

regards, tom lane



Re: [HACKERS] COPY into a view; help w. design & patch

2007-05-21 Thread Karl O. Pinc


On 05/19/2007 12:41:47 PM, Tom Lane wrote:

"Karl O. Pinc" <[EMAIL PROTECTED]> writes:
> I don't really want to do this.  I really want my users
> to be able to use the COPY statement without worrying
> about whether they are copying into a table or a view.

But ... but ... the proposed feature entirely fails to achieve that.
Copying into an explicit INSERT statement isn't necessarily a bad
idea,
but surely it's not transparent in that way.


Sorry to be confusing.  The first sentence in my email
starting this thread was "I'm attempting a patch
that would allow the COPY
command to copy into a view."  That's what the
patch does.

The part of the first email that talks
about copying into an INSERT statement was my
attempt to summarize the brief discussion that
occurred on IRC when I went there to talk about
copying into a view.  As well as the design related
thoughts I've had since.

If you could COPY into an INSERT statement then
you could have COPYing into a view be syntactic
sugar for COPYing into a INSERT statement that
inserts into the view.  Then it'd be transparent.
(And that's how my patch actually works now,
it comes up with an INSERT statement and
prepares it and (tries to) execute it in a portal.)



There's been previous discussion of allowing BEFORE INSERT triggers
on views, so long as the triggers always return NULL to suppress
the actual insertion attempt (ie, we'd move the "can't insert into
view" test out of the rewriter and put it downstream of trigger firing
in the executor).  So far no one's figured out how to make that idea
work for UPDATE/DELETE, but maybe you could argue that even if it
only worked for INSERT it'd be a useful feature.  It'd certainly solve
the problem for COPY.


Disclaimer: At least some of that discussion was my fault.

I'd be happy to go in that direction, but arguing for
having BEFORE INSERT triggers work for
INSERT and not on UPDATE and DELETE seems tough.  It's a
little like driving blind in one eye, you can see _partly_
where you're going but not all the way to the end of the road.
I'd be afraid it'd be a bad design decision that would
artificially constrain later work.

Recalling the discussion, plus perhaps a solution:

The problem with UPDATE and DELETE on BEFORE
triggers is coming up with an OLD row.  IIRC it is
possible to take the SELECT associated with the view,
add to it the additional WHERE conditions specified
by the UPDATE and the DELETE, and run the resulting
query to come up with the OLD row(s).

The steps would be something like:

Take the raw parse tree of the UPDATE/DELETE WHERE and stick it
into the raw parse tree of a:
"SELECT * FROM viewname"

Send the result through the rule system and run it.

For each row returned do the UPDATE or DELETE, which
would involve calling the BEFORE INSERT trigger
with the row as OLD.

Check that the BEFORE INSERT returns NULL.

The big problem that came up revolves around what
to do if the view does not contain any/enough
primary keys.  That would make the BEFORE INSERT
trigger pretty useless when it comes to updating
the underlying tables.  I am willing to argue that
the solution to this need not be implemented right
away, along with the rest of the BEFORE INSERT
on views, because there's clear utility when the
views _do_ reveal primary keys.  And, perhaps more
importantly, I think there's at least one clean way
to solve the problem.

The trick would be to allow the definition of
the view to "hide" particular columns
in the normal use of the view, but show them in the
OLD rows of the BEFORE INSERT trigger.  This would
make primary keys or other data available to the
trigger, without having to reveal them to the
regular users of the view. A syntax like:

CREATE VIEW viewname ( [ BEFORE INSERT ] columnname, ... )
 AS query

If "BEFORE INSERT" shows up before "columnname" then
the columnname is not visible in the view, but is visible
in the OLD row of the BEFORE INSERT trigger code.
I'd imagine this means keeping two different query trees
for the view, one with and one without the hidden columns.

Another trouble would be if the view had multiple
columns with the same name, as gotten from
the underlying tables.  The solution there is simple;
don't allow BEFORE INSERT triggers on views unless
the columns of the view have unique names.

To keep things sane it'd probably also be a good
idea to have BEFORE INSERT triggers on views
be mutually exclusive with having INSERT/UPDATE/DELETE
rules on them.  But I've not thought that through.

Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




Re: [HACKERS] pg_get_tabledef

2007-05-21 Thread Naz Gassiep
Just a question, is there any advantage to having this over building a
function in applications that wrap and use pg_dump with a few options?
Surely that's a more appropriate way to achieve this functionality?
- Naz.


Usama Munir wrote:
> Hi,
>
> I was following a thread some time ago where adding a function
> *pg_get_tabledef* was one of the TODOs for 8.2, but it somehow didn't
> make it into the release, perhaps because the functionality was not
> clearly defined? Not sure.
>
> Anyway, I happened to come up with a function for pg_get_tabledef which
> works something like the following:
>
> postgres=# select pg_get_tabledef(16388) ;
>                 pg_get_tabledef
> --------------------------------------------
>  CREATE TABLE  public.dept
>  (
>   deptno numeric(2,0) NOT NULL,
>   dname  character varying(14),
>   loc    character varying(13)
>  )
>  WITHOUT OIDS;
> (1 row)
>
>
>
> I wanted to submit a patch for this, IFF the community wants this
> function. The rationale is obviously to help application developers
> writing applications like pgAdmin. Currently this part of the SQL needs
> to be constructed manually for Postgres by the tools.
>
> It is arguable that a table definition will have constraints, triggers
> etc. as well, and they can be added without much problem, but I think
> if a tool needs to construct the SQL for all table-related objects then
> functions are already available for them, like pg_get_constraintdef,
> pg_get_ruledef, pg_get_indexdef, pg_get_triggerdef etc.
>
> I understand that you guys don't like someone to develop a patch in
> isolation and just come up with it one day, but it really came out as
> a by-product of some other work, and I thought you guys might be
> interested in it.
>
> If it is desired, I will submit a patch for it within a day or so.
>
> Regards,
> Usama Munir
> EnterpriseDB (www.enterprisedb.com)
>
>
>



Re: [HACKERS] [Fwd: PGBuildfarm member vaquita Branch HEAD Status changed from ECPG-Check failure to OK]

2007-05-21 Thread Michael Meskes
On Mon, May 21, 2007 at 10:00:50AM +0100, Dave Page wrote:
> Hi Michael,
> 
> Looks like your latest commit fixed the problem. Thanks for working on it.

Thanks for your help. Just in case anyone wonders, I don't know exactly
which call caused it, but the candidates are only localtime and mktime.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Shachar Shemesh
Peter Eisentraut wrote:

> "I want to use the binary format but I don't like what it does."
It doesn't do anything. The set of circumstances under which it is
useful has been limited, on purpose, and, as far as I can see, without
any good reason. By spending not much time (and I offered to spend it
myself, so it's not like I'm asking anyone to do this for me), this
format can be made to allow transfer of data between installations. Why
wouldn't we want that?

> We have fought many years to get closer to IEEE 754 
> conformance.

Please notice that the format I offered *is* IEEE. In fact, what I'm
offering is to export the binary in IEEE format EVEN IF THE NATIVE
FORMAT ISN'T.

As for the ARM architecture, I've pulled my sources, and the answer is
this: ARM doesn't have one standard floating point format. Different ARM
architectures will use different formats. Most architectures will
actually use IEEE, but some will use decimal based and such. According
to my source (a distributor of ARM based hardware), none of the other
formats will lose precision if translated to IEEE.

So, I will repeat my original question. I can write portable code that
will translate the native format to IEEE (if it's not already the same).
It seems that it will be good enough for all platforms discussed here.
Failing that, we can adopt my later proposal which is IEEE + status for
all places where that is good enough.
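
A rough sketch of the kind of portable translation I mean (normal,
finite, nonzero values only -- a real version would also need zeros,
NaN, Infinity, denormals, rounding and exponent-range checks). It uses
only portable C arithmetic, so it does not presume the native format is
IEEE:

#include <stdint.h>
#include <math.h>

static uint64_t
double_to_ieee_bits(double d)
{
    int         e;
    double      frac = frexp(fabs(d), &e);  /* |d| = frac * 2^e, frac in [0.5, 1) */
    uint64_t    sign = (d < 0.0) ? 1 : 0;
    uint64_t    mant;

    /* IEEE stores 1.xxx * 2^(e-1); scale frac up and drop the implicit
     * leading bit -- native precision beyond 53 bits is truncated here */
    mant = (uint64_t) ldexp(frac, 53) & ((UINT64_C(1) << 52) - 1);

    return (sign << 63) | ((uint64_t) (e - 1 + 1023) << 52) | mant;
}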

Shachar



Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Shachar Shemesh
Csaba Nagy wrote:
> If you care about the +/- for +/-Infinity, you must also care about +/-0
> too, so you get the right type of infinity if you divide by 0... so +0
> and -0 are far from being semantically identical.
>
> Cheers,
> Csaba.
>
>   
My suggestion accommodates that.

Shachar



Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Csaba Nagy
> In other words, if I can assure that data exported and then imported
> will always, under all circumstances, compare the same to the original,
> would that be enough of a requirement? In other words, if I offer a
> format that is assured of preserving both mantissa and exponent
> precision and range, as well as all extra attributes (+/-Infinity and
> NaN), but does not guarantee that the semantically identical constructs
> are told apart (+0 vs. -0, and the different NaNs), would that format be
> acceptable?

If you care about the +/- for +/-Infinity, you must also care about +/-0
too, so you get the right type of infinity if you divide by 0... so +0
and -0 are far from being semantically identical.

Cheers,
Csaba.





Re: [HACKERS] Role members

2007-05-21 Thread Richard Huxton

Akmal Akmalhojaev wrote:

It doesn't matter whether the "inherit" flag is set. I'm looking for a C
function like SearchSysCache() or something.


Ah - I think the file you need to look in is backend/utils/adt/acl.c - 
I think it's looking at things the other way around. That is - is the 
current role a member of role X, rather than listing all the roles that 
are members of X.


--
  Richard Huxton
  Archonet Ltd



Re: [HACKERS] Re: [Oledb-dev] double precision error with pg linux server, but not with windows pg server

2007-05-21 Thread Peter Eisentraut
Am Montag, 21. Mai 2007 07:04 schrieb Shachar Shemesh:
> In other words, if I offer a
> format that is assured of preserving both mantissa and exponent
> precision and range, as well as all extra attributes (+/-Infinity and
> NaN), but does not guarantee that the semantically identical constructs
> are told apart (+0 vs. -0, and the different NaNs), would that format be
> acceptable?

I don't think so.  We have fought many years to get closer to IEEE 754 
conformance.  I don't think we want to throw that away without a good 
reason.  "I want to use the binary format but I don't like what it does." is 
not a good enough reason in my mind.  For one thing, I'd want to see some 
performance measurements.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



[HACKERS] pg_get_tabledef

2007-05-21 Thread Usama Munir




Hi, 

I was following a thread some time ago where adding a function pg_get_tabledef
was one of the TODOs for 8.2, but it somehow didn't make it into the
release, perhaps because the functionality was not clearly defined? Not
sure.

Anyway, I happened to come up with a function for pg_get_tabledef which
works something like the following:

postgres=# select pg_get_tabledef(16388) ;
                pg_get_tabledef
--------------------------------------------
 CREATE TABLE  public.dept
 (
  deptno numeric(2,0) NOT NULL,
  dname  character varying(14),
  loc    character varying(13)
 )
 WITHOUT OIDS;
(1 row)



I wanted to submit a patch for this, IFF the community wants this
function. The rationale is obviously to help application developers
writing applications like pgAdmin. Currently this part of the SQL needs
to be constructed manually for Postgres by the tools.

It is arguable that a table definition will have constraints, triggers
etc. as well, and they can be added without much problem, but I think if
a tool needs to construct the SQL for all table-related objects then
functions are already available for them, like pg_get_constraintdef,
pg_get_ruledef, pg_get_indexdef, pg_get_triggerdef etc.

I understand that you guys don't like someone to develop a patch in
isolation and just come up with it one day, but it really came out as a
by-product of some other work, and I thought you guys might be
interested in it.

If it is desired, I will submit a patch for it within a day or so.

Regards,
Usama Munir
EnterpriseDB (www.enterprisedb.com)







Re: [HACKERS] mb and ecpg regression tests

2007-05-21 Thread Magnus Hagander
Joachim Wieland wrote:
> On Thu, May 17, 2007 at 10:14:45PM +0200, Magnus Hagander wrote:
>> IIRC, Joachim had started working on the ecpg part. (Which is one of the
>> reasons I haven't looked at it myself yet)
> 
>> Joachim, did you ever get anywhere with that?
> 
> I have ecpg tests somehow running under msvc but there are still some issues.
> Here are the problems that I encounter(ed):
> 
> a) ecpg uses a program for each test that runs and generates output. Those
> programs need to be compiled: in MSVC we would either compile them on the
> fly while testing or generate a new project for every single executable.
> Since the second option is overkill, Magnus suggested to try compiling them
> on the fly with the command line compiler. This is ugly but seems to work.

It might also be possible to generate a wildcard msbuild file for them,
perhaps? (If you have Visual Studio 2005, you will have msbuild) Worth
investigating, but this is probably the easiest thing to fix.


> b) ecpg tests use several directories. This causes several problems, one of
> them being the resultmap format to be changed, it currently uses:
> 
> float4/i.86-pc-mingw32=...
> 
> we might want to change the / to some other character here.

Hmm. I guess doable if necessary.


> c) ecpg uses 3 different files per test: stderr, stdout and generated
> source. There is not just one file to diff against an expected file.
> 
> My approach was to have separate init and test routines for each "module"
> (backend regression tests, ecpg or whatever) and share the rest.
> The init routine sets up stuff and the test routine knows about the specific
> test logic. It generates lists of files to diff. The diffing and reporting
> is kept common of course. A problem here are alternative comparison files.
> We have the resultmap entries and we have "test_n.out".  This has to be
> combined with ecpg having 3 files. We might want to use
> 
> test-platform.stdout instead of test.stdout and
> test_1.stderr instead of test.stderr
> 
> I have not yet sorted out this last item. Proposals welcome. I hope that I
> can fix it on the weekend or at least by the end of next week and then send
> my version to whoever is interested.

Hmm. Probably easier for someone who already knows how the ecpg tests
actually work. I just keep breaking them ;-)

However - do go ahead and send what you have to -patches for comments as
a WIP patch, even if it's not ready. I'd certainly be interested at
looking at what you have so far.

//Magnus



Re: [HACKERS] Role members

2007-05-21 Thread Richard Huxton

Akmal Akmalhojaev wrote:

Hello! I have the following question.

For example, I have a role ID1 with members ID2 and ID3.
Role ID2 also has members ID4 and ID5, which means that roles ID4 and ID5 are
members of ID1.
The question: is there any function in PostgreSQL that finds all the
members of role ID1 - even such members as ID4 and ID5?


Akmal, this isn't really a question for the hackers list. Please try on 
"general" or "admin".


However, I know of no function that will recursively expand member 
lists. You could easily write your own of course, but you'll want to 
decide how to handle the "inherits" flag.


--
  Richard Huxton
  Archonet Ltd



[HACKERS] Role members

2007-05-21 Thread Akmal Akmalhojaev

Hello! I have the following question.

For example, I have a role ID1 with members ID2 and ID3.
Role ID2 also has members ID4 and ID5, which means that roles ID4 and ID5 are
members of ID1.
The question: is there any function in PostgreSQL that finds all the
members of role ID1 - even such members as ID4 and ID5?

Thanks.


Re: [HACKERS] [Fwd: PGBuildfarm member vaquita Branch HEAD Status changed from ECPG-Check failure to OK]

2007-05-21 Thread Guillaume Lelarge

Dave Page wrote:

Hi Michael,

Looks like your latest commit fixed the problem. Thanks for working on it.



But it breaks the 8.2 and 8.1 branches. See 
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=kite&dt=2007-05-21%2007:47:01.


I've sent a patch to pgsql-patches that fixes it.

Regards.


--
Guillaume.




[HACKERS] [Fwd: PGBuildfarm member vaquita Branch HEAD Status changed from ECPG-Check failure to OK]

2007-05-21 Thread Dave Page

Hi Michael,

Looks like your latest commit fixed the problem. Thanks for working on it.

Regards, Dave

-------- Original Message --------
Subject: PGBuildfarm member vaquita Branch HEAD Status changed from 
ECPG-Check failure to OK

Date: Sun, 20 May 2007 14:24:34 -0700 (PDT)
From: PG Build Farm <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], 
[EMAIL PROTECTED]



The PGBuildfarm member vaquita had the following event on branch HEAD:

Status changed from ECPG-Check failure to OK

The snapshot timestamp for the build that triggered this notification 
is: 2007-05-20 20:00:08


The specs of this machine are:
OS:  Windows Vista Ultimate / 6.0.6000
Arch: i686
Comp: GCC / 3.4.2 (mingw-special)

For more information, see 
http://www.pgbuildfarm.org/cgi-bin/show_history.pl?nm=vaquita&br=HEAD

