On Aug 1, 2006, at 16:15 , Harald Armin Massa wrote:
As accepting 2006-02-31 as a valid date would require brainwashing
at least the entire core team, we should find a recommended path
of date migration from different universes.
Have you checked out the mysql2pgsql[1] or my2postgres
I've found a problem with the VALUES-as-RTE approach:
regression=# create table src(f1 int, f2 int);
CREATE TABLE
regression=# create table log(f1 int, f2 int, tag text);
CREATE TABLE
regression=# insert into src values(1,2);
INSERT 0 1
regression=# create rule r2 as on update to src do
Tom Lane wrote:
I've found a problem with the VALUES-as-RTE approach:
regression=# create table src(f1 int, f2 int);
CREATE TABLE
regression=# create table log(f1 int, f2 int, tag text);
CREATE TABLE
regression=# insert into src values(1,2);
INSERT 0 1
regression=# create rule r2 as on
Alvaro Herrera [EMAIL PROTECTED] writes:
Does it work if you do
regression=# create rule r2 as on update to src do
regression-# insert into log values(old.f1, old.f2, 'old'), (new.f1, new.f2,
'new');
No, that's not the problem. * expansion works just fine here, it's
the executor that can't
Joe Conway [EMAIL PROTECTED] writes:
Tom Lane wrote:
What I'm inclined to do for 8.2 is to disallow OLD/NEW references in
multi-element VALUES clauses; the feature is still tremendously useful
without that.
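To make the proposed restriction concrete, here is a sketch using the tables from the test case above (the exact error wording is not something shown in this thread):

    -- still allowed: OLD/NEW references in a single-row VALUES
    create rule r1 as on update to src do
      insert into log values(old.f1, old.f2, 'old');

    -- rejected under the proposal: OLD/NEW inside a multi-row VALUES list
    create rule r2 as on update to src do
      insert into log values(old.f1, old.f2, 'old'), (new.f1, new.f2, 'new');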
Given the timing, this sounds like a reasonable approach. I agree that
the feature
I wrote:
I still dislike the way you're doing things in the executor though.
I don't see the point of using the execScan.c machinery; most of the
time that'll be useless overhead. As I said before, I think the right
direction here is to split Result into two single-purpose node types
and
Tom Lane wrote:
So what I'm currently thinking is
1. Implement ValuesScan.
2. Convert all existing uses of Result without a child node into
ValuesScan.
3. Rename Result to Filter and rip out whatever code is only used for
the no-child-node case.
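For context, the child-less Result case is what you get today for a FROM-less query; a sketch (plan costs elided):

    regression=# explain select 2+2;
          QUERY PLAN
    ----------------------
     Result  (cost=... rows=1 ...)

Under the proposal, such queries would instead produce a ValuesScan, and the renamed Filter node would keep only its role of evaluating a gating condition above a child plan.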
Steps 2 and 3 are just in the nature of
Joe Conway [EMAIL PROTECTED] writes:
One of the things I'm struggling with is lack of column aliases. Would
it be reasonable to require something like this?
SELECT ... FROM (VALUES ...) AS foo(col1, col2, ...)
Requiring column aliases is counter to spec ...
The other issue is how to
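For comparison, the alias form under discussion looks like this (and since the spec does not let aliases be mandatory, the released behavior falls back to generated names column1, column2, ... when no alias list is given):

    select id, name
    from (values (1, 'one'), (2, 'two')) as t(id, name);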
Joe Conway [EMAIL PROTECTED] writes:
Tom Lane wrote:
As for the types, I believe that the spec pretty much dictates that we
apply the same type resolution algorithm as for a UNION.
Where do I find that algorithm -- somewhere in nodeAppend.c?
select_common_type(), in the parser.
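A sketch of what UNION-style resolution means here: each VALUES column is resolved across all rows with select_common_type(), exactly as the corresponding UNION would be:

    -- both of these resolve column x to numeric
    -- (integer 1 and numeric 2.5 have numeric as their common type):
    select x from (values (1), (2.5)) as t(x);
    select 1 as x union all select 2.5;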
On Thu, Jul 20, 2006 at 08:46:13PM -0400, Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
I'm liking this too. But when you say jointree node, are you saying to
model the new node type after NestLoop/MergeJoin/HashJoin nodes? These
are referred to as join nodes in ExecInitNode. Or as
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
I was actually just looking at that and ended up thinking that it might
be better to deal with it one level down in ExecProject (because it is
already passing targetlists directly to ExecTargetList).
I'd vote against that, because (a)
Joe Conway [EMAIL PROTECTED] writes:
I'm liking this too. But when you say jointree node, are you saying to
model the new node type after NestLoop/MergeJoin/HashJoin nodes? These
are referred to as join nodes in ExecInitNode. Or as you mentioned a
couple of times, should this look more like
Tom Lane wrote:
No, I guess I confused you by talking about the executor representation
at the same time. This is really unrelated to the executor. The join
tree I'm thinking of here is the data structure that dangles off
Query.jointree --- it's a representation of the query's FROM clause,
and
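In other words, the VALUES list becomes a new kind of range-table entry referenced from Query.jointree. A rough sketch, not the actual declaration (in the code as eventually committed for 8.2, the RTE kind is RTE_VALUES and the rows live in values_lists):

    /* sketch of the parser-level representation, not the real struct */
    typedef struct RangeTblEntry
    {
        RTEKind  rtekind;       /* RTE_RELATION, RTE_SUBQUERY, ..., RTE_VALUES */
        List    *values_lists;  /* for RTE_VALUES: a list of expression
                                 * lists, one per VALUES row */
        /* ... many other fields elided ... */
    } RangeTblEntry;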
Joe Conway wrote:
Tom Lane wrote:
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
Strange. Last time I checked I thought MySQL dump used 'multivalue
lists in inserts' for dumps, for the same reason that we use COPY
I think Andrew identified the critical point upthread: they don't try
to
Joe Conway [EMAIL PROTECTED] writes:
I did some testing today against mysql and found that it will easily
absorb insert statements with 1 million targetlists provided you set
max_allowed_packet high enough for the server. It peaked out at about
600MB, compared to my similar test last night
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
I did some testing today against mysql and found that it will easily
absorb insert statements with 1 million targetlists provided you set
max_allowed_packet high enough for the server. It peaked out at about
600MB, compared to my test
Joe Conway [EMAIL PROTECTED] writes:
The difficulty is finding a way to avoid all that extra work without a
very ugly special case kludge just for inserts.
[ thinks a bit ... ]
It seems to me that the reason it's painful is exactly that INSERT
... VALUES is a kluge already. We've
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
The difficulty is finding a way to avoid all that extra work without a
very ugly special case kludge just for inserts.
[ thinks a bit ... ]
It seems to me that the reason it's painful is exactly that INSERT
... VALUES is a kluge already.
Joe Conway [EMAIL PROTECTED] writes:
Tom Lane wrote:
I think the place we'd ultimately like to get to involves changing the
executor's Result node type to have a list of targetlists and sequence
through those lists to produce its results
I was actually just looking at that and ended up
If the use case is people running MySQL dumps, then there will be
millions of values-targetlists in MySQL dumps.
I did some experimentation just now, and could not get mysql to accept a
command longer than about 1 million bytes. It complains about
Got a packet bigger than
from http://dev.mysql.com/doc/refman/4.1/en/blob.html
You can change the message buffer size by changing the value of the
max_allowed_packet variable, but you must do so for both the server and
your client program. For example, both mysql and mysqldump allow you to
change the client-side
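Concretely, the knobs being described are real MySQL options; the 64M value below is just an illustrative choice:

    # server side, e.g. in my.cnf:
    [mysqld]
    max_allowed_packet=64M

    # client side:
    shell> mysql --max_allowed_packet=64M db_name < dump.sql
    shell> mysqldump --max_allowed_packet=64M db_name > dump.sql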
On Tue, Jul 18, 2006 at 02:19:01PM -0400 I heard the voice of
Tom Lane, and lo! it spake thus:
I did some experimentation just now, and could not get mysql to accept a
command longer than about 1 million bytes. It complains about
Got a packet bigger than 'max_allowed_packet' bytes
I did some experimentation just now, and could not get mysql to accept a
command longer than about 1 million bytes. It complains about
Got a packet bigger than 'max_allowed_packet' bytes
which seems a bit odd because max_allowed_packet is allegedly set to
16 million, but anyway I don't think
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
Strange. Last time I checked I thought MySQL dump used 'multivalue
lists in inserts' for dumps, for the same reason that we use COPY
I think Andrew identified the critical point upthread: they don't try
to put an unlimited number of rows into