On 09/02/2013 02:26 PM, Andres Freund wrote:
On 2013-09-02 14:20:57 -0400, Andrew Dunstan wrote:
On 09/02/2013 01:30 PM, Tom Lane wrote:
Andrew Dunstan writes:
Yes, possibly, but we can't do that now; I would like to fix the
docs now.
If you want this in 9.3.0 it needs to be committed in the next couple of
hours.
On 2013-09-02 14:20:57 -0400, Andrew Dunstan wrote:
>
> On 09/02/2013 01:30 PM, Tom Lane wrote:
> >Andrew Dunstan writes:
> >>Yes, possibly, but we can't do that now; I would like to fix the
> >>docs now.
> >If you want this in 9.3.0 it needs to be committed in the next couple of
> >hours.
>
On 09/02/2013 01:30 PM, Tom Lane wrote:
Andrew Dunstan writes:
Yes, possibly, but we can't do that now; I would like to fix the
docs now.
If you want this in 9.3.0 it needs to be committed in the next couple of
hours.
FWIW, the idea seemed generally sane to me, but I'd suggest not depending
Andrew Dunstan writes:
> Yes, possibly, but we can't do that now; I would like to fix the
> docs now.
If you want this in 9.3.0 it needs to be committed in the next couple of
hours.
FWIW, the idea seemed generally sane to me, but I'd suggest not depending
on reltoastrelid being zero when an
On 09/01/2013 10:33 PM, Josh Berkus wrote:
Maybe for bonus points we'd print out the schema (e.g. by selecting
c.oid::regclass instead of c.relname), and also include materialized
views which are omitted from the query altogether.
Given the importance of this, maybe we need to have it as part
> Maybe for bonus points we'd print out the schema (e.g. by selecting
> c.oid::regclass instead of c.relname), and also include materialized
> views which are omitted from the query altogether.
Given the importance of this, maybe we need to have it as part of
pg_stat_user_tables?
--
Josh Berkus
The other day I followed the docs and ran this query:
SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
Then after identifying some tables that were close to triggering an
automatic vacuum, we vacuumed them only to find that the age of the
database's datfrozenxid hadn't
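Folding in the suggestions upthread (schema-qualified names via c.oid::regclass, plus materialized views), a sketch of what the amended catalog query might look like; including TOAST tables via relkind 't' is my assumption about what is worth watching:
SELECT c.oid::regclass AS relation,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY xid_age DESC;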
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> This is used for calculating the upper limit of the -c option. However,
> the actual limit comes from the number of descriptors that select(2)
> can watch (besides the number of file descriptors allowed by the
> kernel; that is a different story though, I think).
I found the following in pgbench.c:
#define MAXCLIENTS 1024 /* max number of clients allowed */
This is used for calculating the upper limit of the -c option. However,
the actual limit comes from the number of descriptors that select(2)
can watch (besides the number of file descriptors allowed by the
kernel; that is a different story though, I think).
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Could you not just scan right and see what the first key was? Though
> granted, that means there's a chance of a wasted page scan, but I think
> that'd be somewhat of a corner case, so it might not be bad.
No, because (a) that confuses the first key th
On Wed, Jul 19, 2006 at 06:23:44PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > On Tue, Jul 11, 2006 at 10:02:49AM -0400, Tom Lane wrote:
> >> 1. In a non-rightmost page, we need to include a "high key", or page
> >> boundary key, that isn't one of the useful data keys.
>
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Tue, Jul 11, 2006 at 10:02:49AM -0400, Tom Lane wrote:
>> 1. In a non-rightmost page, we need to include a "high key", or page
>> boundary key, that isn't one of the useful data keys.
> Why does a leaf page need a boundary key?
So you can tell whet
On Tue, Jul 11, 2006 at 10:02:49AM -0400, Tom Lane wrote:
> Currently, we restrict btree index tuples to a size that ensures three of
> them will fit on a page. The motivation for this is the following two
> considerations:
>
> 1. In a non-rightmost page, we need to include a "high key", or page
On one fine day, Tue, 2006-07-11 at 10:46, Josh Berkus wrote:
> Tom,
>
> > Obviously a tree containing many such pages would be awfully inefficient
> > to search, but I think a more common case is that there are a few wide
> > entries in an index of mostly short entries, and so pushing the hard
> > limit up a little would add some flexibility with little performance cost
Tom,
Obviously a tree containing many such pages would be awfully inefficient
to search, but I think a more common case is that there are a few wide
entries in an index of mostly short entries, and so pushing the hard
limit up a little would add some flexibility with little performance
cost in r
Currently, we restrict btree index tuples to a size that ensures three of
them will fit on a page. The motivation for this is the following two
considerations:
1. In a non-rightmost page, we need to include a "high key", or page
boundary key, that isn't one of the useful data keys.
2. In a non-l
On Friday 26 May 2006 09:45, Gurjeet Singh wrote:
> Something not supported should be reported as such with an ERROR. Except
> for count(), none of the following make any sense:
>
> The transcript:
>
> test=# \d t1
> Table "public.t1"
> Column | Type | Modifiers
> --------+------+-----------
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> My concern is that it's not inconceivable to typo max(field) into
> max(*), which could make for a rather frustrating error. Not to mention
> this being something that could trip newbies up. If nothing else I'd say
> it warrants a %TODO just so it doesn
On Fri, May 26, 2006 at 14:06:29 -0500,
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
>
> But if aggregate(*) just gets turned into aggregate(1) by the backend,
> why not just tell people to use aggregate(1) for their custom
> aggregates? Or am I misunderstanding how aggregate(*) is actually
> handled?
On Fri, May 26, 2006 at 11:03:17AM -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Fri, 2006-05-26 at 10:22 +0200, Dennis Bjorklund wrote:
> >> Shouldn't
> >> SELECT max(*) FROM foo;
> >> give an error?
>
> > Doesn't seem an important or even useful extension of the standard
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Fri, 2006-05-26 at 10:22 +0200, Dennis Bjorklund wrote:
>> Shouldn't
>> SELECT max(*) FROM foo;
>> give an error?
> Doesn't seem an important or even useful extension of the standard, but
> would probably require special case processing for every aggregate
On 5/26/06, Simon Riggs <[EMAIL PROTECTED]> wrote:
> On Fri, 2006-05-26 at 10:22 +0200, Dennis Bjorklund wrote:
> > Shouldn't
> >
> >    SELECT max(*) FROM foo;
> >
> > give an error?

IMO, yes.

> SQL:2003 would not allow this; SQL:2003 permits only COUNT(*) and no
> other aggregate function. All other aggregates require a value
> expression.
On Fri, 2006-05-26 at 10:22 +0200, Dennis Bjorklund wrote:
> Shouldn't
>
>    SELECT max(*) FROM foo;
>
> give an error?
SQL:2003 would not allow this; SQL:2003 permits only COUNT(*) and no
other aggregate function. All other aggregates require a value
expression.
> Instead it's executed like
Shouldn't
SELECT max(*) FROM foo;
give an error? Instead it's executed like
SELECT max(1) FROM foo;
Just like count(*) is executed as count(1).
Something for the TODO or is it a feature?
ps. I know it's not an important case since no one sane would try to
calculate max(*), but still.
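For concreteness, a minimal transcript of the behavior being described (a throwaway table foo is assumed; max(*) is silently read as max(1) on the releases discussed here, though later releases reject it):
test=# CREATE TABLE foo (x int);
test=# INSERT INTO foo VALUES (5);
test=# INSERT INTO foo VALUES (42);
test=# SELECT count(*), max(*), max(x) FROM foo;
 count | max | max
-------+-----+-----
     2 |   1 |  42
(1 row)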
I think a summary of where the discussion went might be helpful
(especially for me after a week or so away doing perl).
There were a number of approaches suggested, which I will attempt to
summarize in a hand-wavy fashion (apologies for any misrepresentation
caused):
i) Rewrite max/min queries and then plan them
On 15 Nov 2004 02:00:37 -0500, Greg Stark <[EMAIL PROTECTED]> wrote:
> I think people should get away from thinking about "order by + limit". That
> isn't going to work for anything with a GROUP BY. And it isn't going to work
> for anything more complex than a single min() or max().
>
> min() only
Jan Wieck <[EMAIL PROTECTED]> writes:
> Both cases can be expressed with order by + limit queries, that would indeed
> utilize those indexes. But what's been discussed so far does not cover any of
> them.
I think people should get away from thinking about "order by + limit". That
isn't going to work for anything with a GROUP BY. And it isn't going to work
for anything more complex than a single min() or max().
On 11/10/2004 11:57 PM, Mark Kirkwood wrote:
Your example and ones like:
SELECT max(foo), count(foo) FROM bar
SELECT max(a.foo1), max(b.foo2) FROM bar1 AS a NATURAL JOIN bar2 AS b
have made me realize that the scope of "what should be optimized" is
somewhat subtle.
I am inclined to keep it simple (i.e. rather limited) for a first cut
On Thu, Nov 11, 2004 at 11:59:06AM -0600, Bruno Wolff III wrote:
> On Thu, Nov 11, 2004 at 09:29:14 +,
> Simon Riggs <[EMAIL PROTECTED]> wrote:
> > On Wed, 2004-11-10 at 22:48, Mark Kirkwood wrote:
> > > Planning for future note: I would like whatever mechanism that is added
> > > for this M
On Thu, Nov 11, 2004 at 08:00:01AM -0600, Bruno Wolff III wrote:
> On Thu, Nov 11, 2004 at 17:52:19 +1100,
> John Hansen <[EMAIL PROTECTED]> wrote:
> > Why not just change the function altogether to 'select $1 from $2
> > order by $1 desc limit 1;'
> >
> > Is there ANY situation where max(col) as it is, would be faster?
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Tom Lane <[EMAIL PROTECTED]> writes:
> >> Oh? How is a first() aggregate going to know what sort order you want
> >> within the group?
>
> > It would look something like
>
> > select x,first(a),first(b) from (select x,a,b from table order by x,y)
> > group by x
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> Oh? How is a first() aggregate going to know what sort order you want
>> within the group?
> It would look something like
> select x,first(a),first(b) from (select x,a,b from table order by x,y) group
> by x
> whi
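first() is not a built-in, but a workable version can be declared by hand. A sketch (the names first/first_agg are mine; marking the transition function STRICT is what makes the state latch onto the first non-null input):
CREATE FUNCTION first_agg(anyelement, anyelement)
RETURNS anyelement
LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT $1';

CREATE AGGREGATE first (anyelement) (
    SFUNC = first_agg,
    STYPE = anyelement
);
Whether the subquery's ORDER BY reliably survives into the outer aggregation is exactly the point Tom presses on in this thread, so treat the GROUP BY usage above as wishful rather than guaranteed.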
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > It would also make it possible to deprecate DISTINCT ON in favour of
> > GROUP BY with first() calls.
>
> Oh? How is a first() aggregate going to know what sort order you want
> within the group? AFAICS first
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Re: knowing internal representation, I think this is required anyway;
> else the optimization would only work in a very limited number of
> situations.
The point of my remark is that pushing this knowledge out to a function
is helpful only if you can put
Greg Stark <[EMAIL PROTECTED]> writes:
> It would also make it possible to deprecate DISTINCT ON in favour of GROUP BY
> with first() calls.
Oh? How is a first() aggregate going to know what sort order you want
within the group? AFAICS first() is only useful when you honestly do
not care which g
On Thu, Nov 11, 2004 at 09:29:14 +,
Simon Riggs <[EMAIL PROTECTED]> wrote:
> On Wed, 2004-11-10 at 22:48, Mark Kirkwood wrote:
> > Planning for future note: I would like whatever mechanism that is added
> > for this MAX/MIN stuff to be amenable to more subtle things like
> > aggregate navigation (see R.Kimball's article
> > http://www.dbmsmag.com/9608d54.html).
On Thu, Nov 11, 2004 at 10:24:34 -0500,
Tom Lane <[EMAIL PROTECTED]> wrote:
>
> We know how to determine that an index matches an ORDER BY clause.
> But what has an aggregate called MAX() got to do with ORDER BY? Magic
> assumptions about operators named "<" are not acceptable answers; there
>
Tom Lane <[EMAIL PROTECTED]> writes:
> As a real-world example of why I won't hold still for hard-wiring this:
> a complex-number data type might have btree opclasses allowing it to be
> sorted either by real part or by absolute value. One might then define
> max_real() and max_abs() aggregates
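One way to carry that association in the catalogs is per-aggregate rather than per-operator-name; CREATE AGGREGATE did later grow a SORTOP option along these lines. A hypothetical sketch for the complex-number case (the complex type, the >>> operator, and complex_abs_larger are stand-ins that don't exist):
-- assumes a btree opclass ordering complex values by absolute value,
-- whose "greater than" member operator is spelled >>> here
CREATE AGGREGATE max_abs (complex) (
    SFUNC  = complex_abs_larger,  -- hypothetical "larger by abs value" function
    STYPE  = complex,
    SORTOP = >>>                  -- ties the aggregate to that btree ordering
);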
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> wrote:
>> We know how to determine that an index matches an ORDER BY clause.
>> But what has an aggregate called MAX() got to do with ORDER BY?
> Wouldn't knowing an opclass and direction associated with an aggregate
> f
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Wed, 2004-11-10 at 22:48, Mark Kirkwood wrote:
> > Planning for future note: I would like whatever mechanism that is added
> > for this MAX/MIN stuff to be amenable to more subtle things like
> > aggregate navigation (see R.Kimball's article
> > http
On Thu, Nov 11, 2004 at 17:52:19 +1100,
John Hansen <[EMAIL PROTECTED]> wrote:
> Why not just change the function altogether to 'select $1 from $2
> order by $1 desc limit 1;'
>
> Is there ANY situation where max(col) as it is, would be faster?
Yes. A couple I can think of are:
When count(col
> How are you planning to represent the association between MIN/MAX and
> particular index orderings in the system catalogs?
Don't we already have that info to decide whether an index handles
an "ORDER BY" without a sort node ?
Andreas
On Thu, Nov 11, 2004 at 01:18:05 -0600,
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> Certainly handling only one case is better than none. I just wanted to
> bring up the multiple aggregate scenario. Also, consider that
>
> SELECT min(a), max(a), min(b), max(c) FROM table
>
> could be optimized as well (into 4 index scans, assuming a, b, and c all
> had indexes).
On Thu, Nov 11, 2004 at 01:08:39AM -0500, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > What about having a new column in pg_aggregate which would point to a
> > function that would try to optimize the aggregate's handling?
>
> I can't get very excited about this, because how would you make a
> reasonably stable/narrow API for such a thing?
On Wed, 2004-11-10 at 22:48, Mark Kirkwood wrote:
> Planning for future note: I would like whatever mechanism that is added
> for this MAX/MIN stuff to be amenable to more subtle things like
> aggregate navigation (see R.Kimball's article
> http://www.dbmsmag.com/9608d54.html).
>
With you on t
"Zeugswetter Andreas DAZ SD" <[EMAIL PROTECTED]> writes:
>> How are you planning to represent the association between MIN/MAX and
>> particular index orderings in the system catalogs?
> Don't we already have that info to decide whether an index handles
> an "ORDER BY" without a sort node ?
We know how to determine that an index matches an ORDER BY clause.
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I don't think you should be rewriting queries as much as providing
> alternate plans and letting the rest of the optimizer decided which
> plan to use. If you just rewrite a query you might lock yourself into
> using a poor plan.
Moreover, none of th
Probably for a small table, where the machinery of reading the index,
followed by checking the table for non-visible tuples, is more costly
than just scanning the table!
regards
Mark
John Hansen wrote:
Why not just change the function altogether to 'select $1 from $2
order by $1 desc limit 1;'
There seems to be (as Tom indicated) a choice of approaches:
i) rewrite max/min queries and then plan 'em
ii) provide alternate plans based on presence of certain aggregate types
in the query
when I first examined this TODO item, I was really thinking about i),
but I suspect that ii) is probably
Certainly handling only one case is better than none. I just wanted to
bring up the multiple aggregate scenario. Also, consider that
SELECT min(a), max(a), min(b), max(c) FROM table
could be optimized as well (into 4 index scans, assuming a, b, and c all
had indexes).
I don't think any other agg
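Until the planner does this itself, the hand-rewritten equivalent is a row of scalar subqueries, each of which can walk the relevant index from one end; a sketch with placeholder names (the IS NOT NULL guards mirror the way min/max ignore nulls):
SELECT (SELECT a FROM t WHERE a IS NOT NULL ORDER BY a ASC  LIMIT 1) AS min_a,
       (SELECT a FROM t WHERE a IS NOT NULL ORDER BY a DESC LIMIT 1) AS max_a,
       (SELECT b FROM t WHERE b IS NOT NULL ORDER BY b ASC  LIMIT 1) AS min_b,
       (SELECT c FROM t WHERE c IS NOT NULL ORDER BY c DESC LIMIT 1) AS max_c;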
Why not just change the function all together to 'select $1 from $2
order by $1 desc limit 1;'
Is there ANY situation where max(col) as it is, would be faster?
... John
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> What about having a new column in pg_aggregate which would point to a
> function that would try to optimize the aggregate's handling?
I can't get very excited about this, because how would you make a
reasonably stable/narrow API for such a thing? The f
On Thu, Nov 11, 2004 at 17:57:42 +1300,
Mark Kirkwood <[EMAIL PROTECTED]> wrote:
> Your example and ones like:
>
> SELECT max(foo), count(foo) FROM bar
> SELECT max(a.foo1), max(b.foo2) FROM bar1 AS a NATURAL JOIN bar2 AS b
>
> have made me realize that the scope of "what should be optimized" is
> somewhat subtle.
Your example and ones like:
SELECT max(foo), count(foo) FROM bar
SELECT max(a.foo1), max(b.foo2) FROM bar1 AS a NATURAL JOIN bar2 AS b
have made me realize that the scope of "what should be optimized" is
somewhat subtle.
I am inclined to keep it simple (i.e. rather limited) for a first cut,
and
On Thu, Nov 11, 2004 at 11:48:49AM +1300, Mark Kirkwood wrote:
> I am looking at implementing this TODO item. e.g. (max case):
>
> rewrite
> SELECT max(foo) FROM bar
> as
> SELECT foo FROM bar ORDER BY foo DESC LIMIT 1
> if there is an index on bar(foo)
Out of curiosity, will you be doing this i
On Wed, Nov 10, 2004 at 22:21:31 -0300,
Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> On Wed, Nov 10, 2004 at 07:18:59PM -0500, Tom Lane wrote:
>
> > A more radical way of handling it would be to detect the relevance of an
> > indexscan in indxpath.c and generate a special kind of Path node; this
Tom Lane wrote:
A more radical way of handling it would be to detect the relevance of an
indexscan in indxpath.c and generate a special kind of Path node; this
would not generalize to other sorts of things as you were hoping, but
I'm unconvinced that the mechanism is going to be very general-purpose
On Wed, Nov 10, 2004 at 07:18:59PM -0500, Tom Lane wrote:
> A more radical way of handling it would be to detect the relevance of an
> indexscan in indxpath.c and generate a special kind of Path node; this
> would not generalize to other sorts of things as you were hoping, but
> I'm unconvinced that the mechanism is going to be very general-purpose
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> I am looking at implementing this TODO item. e.g. (max case):
> My initial thoughts revolved around extending the existing RULE system
> to be able to handle more general types of rewrite - like conditionals
> in SELECT rules and rewrites that change e
I am looking at implementing this TODO item. e.g. (max case):
rewrite
SELECT max(foo) FROM bar
as
SELECT foo FROM bar ORDER BY foo DESC LIMIT 1
if there is an index on bar(foo)
Suggestions about the most suitable point in the parser/planner stage to
perform this sort of rewrite would be most welcome
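One wrinkle any such rewrite has to get right: max() ignores nulls, while a descending sort puts them first by default, so the naive form is not quite equivalent. A sketch of the difference, using the same bar(foo):
-- naive rewrite: returns NULL if any foo is NULL,
-- because DESC ordering places nulls first by default
SELECT foo FROM bar ORDER BY foo DESC LIMIT 1;

-- behavior-preserving form: filter nulls the way max() does
SELECT foo FROM bar WHERE foo IS NOT NULL ORDER BY foo DESC LIMIT 1;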
Greg Stark <[EMAIL PROTECTED]> writes:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
>> I'd expect copy to be a single command, no matter how many rows were
>> copied.
> It might prevent you from using pg_dump --inserts ?
Not even that, unless you *also* modified the dump output to wrap
BEGIN/END a
Clinging to sanity, [EMAIL PROTECTED] (ow) mumbled into her beard:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> > Wouldn't the above put a limit on the number of records one could have
>> > in a table?
>>
>> No.
>
> If I understand correctly, a table that has 4B+ rows cannot be restored after
> the dump and that, in turn, may/will affect the ability to upgrade to new
> versions of pgSql.
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> ow <[EMAIL PROTECTED]> writes:
> > If I understand correctly, a table that has 4B+ rows cannot be restored after
> > the dump and that, in turn, may/will affect the ability to upgrade to new
> > versions of pgSql.
>
> You don't understand correctly.
I
ow <[EMAIL PROTECTED]> writes:
> If I understand correctly, a table that has 4B+ rows cannot be restored after
> the dump and that, in turn, may/will affect the ability to upgrade to new
> versions of pgSql.
You don't understand correctly.
regards, tom lane
Stephan Szabo <[EMAIL PROTECTED]> writes:
> I'd expect copy to be a single command, no matter how many rows were
> copied.
It might prevent you from using pg_dump --inserts ?
--
greg
On Mon, 1 Dec 2003, ow wrote:
> > --- Tom Lane <[EMAIL PROTECTED]> wrote:
> > Actually you can only have 4 billion SQL commands per xid, because the
> > CommandId datatype is also just 32 bits. I've never heard of anyone
> > running into that limit, though.
> >
>
> Wouldn't the above put a limit on the number of records one could have
> in a table?
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> > Wouldn't the above put a limit on the number of records one could have
> > in a table?
>
> No.
If I understand correctly, a table that has 4B+ rows cannot be restored after
the dump and that, in turn, may/will affect the ability to upgrade to new
versions of pgSql.
ow <[EMAIL PROTECTED]> writes:
>> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> Actually you can only have 4 billion SQL commands per xid, because the
>> CommandId datatype is also just 32 bits. I've never heard of anyone
>> running into that limit, though.
> Wouldn't the above put a limit on the number of records one could have
> in a table?
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
> Actually you can only have 4 billion SQL commands per xid, because the
> CommandId datatype is also just 32 bits. I've never heard of anyone
> running into that limit, though.
>
Wouldn't the above put a limit on the number of records one could have in a table?
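The counter in question is easy to watch from SQL, since each statement in a transaction runs with the next 32-bit CommandId and rows inserted by it expose that value as cmin; a sketch with a placeholder table t:
CREATE TABLE t (x int);
BEGIN;
INSERT INTO t VALUES (1);   -- runs with CommandId 0 of this transaction
INSERT INTO t VALUES (2);   -- runs with CommandId 1
SELECT cmin, x FROM t;      -- the two rows show cmin 0 and 1
COMMIT;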
Okay, based on 5k records and a "couple of wallclock minutes" being equal
to ~120sec, you are getting 41 inserts/sec?
On Wed, 6 Jun 2001, KuroiNeko wrote:
> > Build a file with 100k INSERT statements in it, and run psql -f on that
> > file ... no BEGIN/END in the file, just straight INSERTs
> Build a file with 100k INSERT statements in it, and run psql -f on that
> file ... no BEGIN/END in the file, just straight INSERTs ... what is the
> max throughput ppl can see?
Hmm. Depends. Inserting up to 5,000 Apache log records takes no
longer than a couple of wallclock minutes.
Morning all ...
Build a file with 100k INSERT statements in it, and run psql -f on
that file ... no BEGIN/END in the file, just straight INSERTs ... what is
the max throughput ppl can see?
I'm seeing reports of it maxing out on an AIX around 450, and on
an HP around 380 ... anyo
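For anyone reproducing this: ~41 inserts/sec, as computed upthread, is roughly what one commit (and hence one WAL flush) per statement buys, so the interesting comparison is the same file wrapped in a single transaction. A sketch with a placeholder table t:
-- as posted: no BEGIN/END, each INSERT commits by itself
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... and so on for 100k statements

-- batched variant: one commit for the whole file
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... and so on ...
COMMIT;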