On Tue, Jan 08, 2008 at 05:33:51PM -0500, Merlin Moncure wrote:
Here is a short example which demonstrates some of the major features.
There are many other examples and discussions of minutia in the
documentation.
I haven't looked at the source but FWIW I think it's an awesome idea.
Have a
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
Did you want me to work on this? I could probably put some time into it
this coming weekend.
I'll try to get to it before that --- if no serious bugs come up this
week, core is thinking of wrapping 8.3.0 at the end of the week, so
So...in the vein of my last mail, I have tried to create another patch
for refactoring out some of the HAVE_INT64_TIMESTAMP ifdefs in the
code in timestamp.c. I have attached the patch. Please let me know if
this patch is acceptable and what I can do to continue this effort.
Thanks,
wt
Sorry for the previous message having no comments.
Just remark:
These aggregates are created successfully in both 8.2 and 8.3beta4:
CREATE AGGREGATE array_concat(anyarray) (
SFUNC=array_cat,
STYPE=anyarray
);
CREATE AGGREGATE array_build(anyelement) (
SFUNC=array_append,
STYPE=anyarray
);
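To illustrate what these definitions do, here is a minimal Python sketch (not PostgreSQL source) of the state-transition semantics of an aggregate declared with SFUNC/STYPE: the state starts out as NULL (None here) and the transition function is applied once per input row. The function names mirror the SQL above but are otherwise illustrative.

```python
def array_append(state, elem):
    # Mimics array_append(anyarray, anyelement): applying the sfunc to a
    # NULL state yields a one-element array, which seeds the aggregate.
    if state is None:
        return [elem]
    return state + [elem]

def run_aggregate(sfunc, rows):
    # Fold the transition function over the input rows, as the executor
    # does for an aggregate with no INITCOND.
    state = None
    for row in rows:
        state = sfunc(state, row)
    return state

# array_build(anyelement) over the rows 1, 2, 3:
result = run_aggregate(array_append, [1, 2, 3])
```

The array_concat(anyarray) aggregate works the same way, with array_cat concatenating whole arrays instead of appending single elements.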
But
I suggest one more standard date/time operator, to divide one interval
by another with numeric (or float, for example) result.
I.e. something like that:
database=# SELECT '5400 seconds'::interval / '1 hour'::interval;
 ?column?
----------
      1.5
(1 row)
Ilya A. Kovalenko
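The proposed operator is easy to sketch for day/time intervals, which normalize cleanly to seconds. The sketch below is illustrative Python, not an implementation proposal; it deliberately handles only day/time components, since month components have no exact length in seconds.

```python
def interval_to_seconds(days=0, hours=0, minutes=0, seconds=0):
    # Normalize a day/time interval to seconds (1 day = 24 hours here;
    # DST shifts are ignored in this sketch).
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

# '5400 seconds'::interval / '1 hour'::interval
ratio = interval_to_seconds(seconds=5400) / interval_to_seconds(hours=1)
```

Dividing intervals that contain month components would require picking a conventional month length, which is where the definitional questions start.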
On Sat, 2008-01-05 at 12:09 +, Simon Riggs wrote:
On Fri, 2008-01-04 at 17:28 +0900, Fujii Masao wrote:
Simon Riggs wrote:
My original one line change described on bug 3843 seems like the best
solution for 8.3.
+1
Is this change in time for RC1?
Patch attached.
Not
Hi,
Maybe I am reposting something which has already been discussed to its end in this
forum. I searched the archives and couldn't find anything
immediately.
With my relatively small experience in Performance Testing and Tuning,
one of the rules of thumb for getting Performance is Don't do
Thanks for the explanation on the ulimits; I can see how that could turn
out a problem in some cases.
Following Tom's suggestion, here is the startup script I used:
#!/bin/sh
ulimit -a > $PGHOST/server.ulimit
pg_ctl start -l $PGHOST/server.log
The ulimits seem to be the same, though:
$ cat
Hi,
Gokulakannan Somasundaram wrote:
If we can ask the Vacuum process to scan
the WAL log, it can get all the relevant details on where it needs to
go.
You seem to be assuming that only few tuples have changed between
vacuums, so that WAL could quickly guide the VACUUM processes to the
On Sat, 2008-01-05 at 16:30 -0500, Robert Treat wrote:
I'm not following this. If we can work out a scheme, I see no reason not to
allow a single table to span multiple tablespaces.
That seems to be something we might want anyway, so yes.
The difference is that, if I currently have a
Hi,
I'm trying to run 'make check' on a 64bit Debian unstable. That aborts
after 60 seconds due to not being able to connect to the postmaster.
I figured that there's nothing wrong with the postmaster, rather psql
can't start up, because it gets linked against an older libpq.so.5. It
looks
Markus Schiltknecht wrote:
Hi,
I'm trying to run 'make check' on a 64bit Debian unstable. That aborts
after 60 seconds due to not being able to connect to the postmaster.
I figured that there's nothing wrong with the postmaster, rather psql
can't start up, because it gets linked against
On Wed, 09 Jan 2008 at 17:33:00 +0700, Ilya A. Kovalenko wrote the following:
I suggest one more standard date/time operator, to divide one interval
by another with numeric (or float, for example) result.
I.e. something like that:
database=# SELECT '5400 seconds'::interval / '1
Andrew Dunstan wrote:
Smells suspiciously like an rpath problem to me. What are your configure
settings?
Ah, yeah, I see. Using something other than --prefix=/usr helped.
Thanks for the hint!
Regards
Markus
On Sun, 2008-01-06 at 11:39 +0100, Markus Schiltknecht wrote:
I think this has to do with SE not being of much use for index scans.
Hmmm. I think it fits rather neatly with BitmapIndexScans. It would be
easy to apply the index condition and/or filters to see which segments
are excluded and
On Mon, 2008-01-07 at 14:20 +0100, Markus Schiltknecht wrote:
AFAIUI, Segment Exclusion combines perfectly well with
clustering.
Yes, seems like it would be possible to have a segment-aware CLUSTER, so
it was actually usable on large tables. Not planning that initially
though.
--
Simon
So it's easily possible having more dead tuples, than live ones. In such
cases, scanning the WAL can easily take *longer* than scanning the
table, because the amount of WAL to read would be bigger.
Yes... I made a wrong assumption there... so the idea is totally
useless.
Thanks,
Gokul.
Simon Riggs wrote:
Hmmm. I think it fits rather neatly with BitmapIndexScans. It would be
easy to apply the index condition and/or filters to see which segments
are excluded and then turn off bits in the bitmap appropriately.
Yeah, good point.
Not fully sure about IndexScans yet. I don't
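The bitmap interaction described above can be sketched in a few lines of illustrative Python (not PostgreSQL code): given per-segment min/max bounds for a column, clear every page bit that falls in a segment the scan condition excludes. The constants and names are toy values for the sketch only.

```python
PAGES_PER_SEGMENT = 4  # toy value; a real segment is 1 GB worth of pages

def excluded(seg_min, seg_max, lo, hi):
    # True when the segment's value range cannot intersect [lo, hi].
    return seg_max < lo or seg_min > hi

def prune_bitmap(page_bits, seg_bounds, lo, hi):
    # Turn off the bitmap bits for every page in an excluded segment.
    bits = list(page_bits)
    for seg_no, (seg_min, seg_max) in enumerate(seg_bounds):
        if excluded(seg_min, seg_max, lo, hi):
            start = seg_no * PAGES_PER_SEGMENT
            for page in range(start, start + PAGES_PER_SEGMENT):
                if page < len(bits):
                    bits[page] = 0
    return bits

# Two segments; a condition range of [10, 20] excludes the second one.
pruned = prune_bitmap([1] * 8, [(0, 15), (30, 40)], 10, 20)
```

The heap scan driven by the pruned bitmap then never touches pages in the excluded segment.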
On Mon, 2008-01-07 at 12:14 +0100, Csaba Nagy wrote:
On Wed, 2008-01-02 at 17:56 +, Simon Riggs wrote:
Like it?
Very cool :-)
Thanks. As ever, a distillation of various thoughts, not all mine.
One additional thought: what about a kind of segment fill factor?
Meaning: each segment
On Wed, 2008-01-09 at 00:22 -0500, Tom Lane wrote:
pgsql-core wasted quite a lot of time
Core's efforts are appreciated by all, so not time wasted.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
On Sat, 2008-01-05 at 16:42 +0100, Markus Schiltknecht wrote:
Simon Riggs wrote:
On Fri, 2008-01-04 at 22:26 +0100, Markus Schiltknecht wrote:
I'm still puzzled about how a DBA is expected to figure out which
segments to mark. Simon, are you assuming we are going to pass on
On Wed, 2008-01-09 at 02:25 +, Gregory Stark wrote:
Markus Schiltknecht [EMAIL PROTECTED] writes:
There are two very distinct ways to handle partitioning. For now, I'm
calling
them named and unnamed partitioning.
The naming is precisely the useful part in that it is how the DBA
Gavin and all,
This is quite a long reply, so apologies for that.
On Wed, 2008-01-09 at 07:28 +0100, Gavin Sherry wrote:
On Wed, Jan 02, 2008 at 05:56:14PM +, Simon Riggs wrote:
This technique would be useful for any table with historical data keyed
by date or timestamp. It would also
Markus Schiltknecht [EMAIL PROTECTED] writes:
Hi,
Gokulakannan Somasundaram wrote:
If we can ask the Vacuum process to scan the WAL log, it can get all the
relevant details on where it needs to go.
That's an interesting thought. I think your caveats are right but with some
more work it
Simon Riggs [EMAIL PROTECTED] writes:
Not sure why this hasn't been applied yet for 8.3
Because it doesn't fix the problem ... which is that the postmaster
kills the archiver (and the stats collector too) at what is now the
wrong point in the shutdown sequence.
Simon Riggs wrote:
I have to admit I always found it kludgy to have objects named
invoices_2000_JAN and invoices_2000_FEB and so on. It's kind of a
meta-denormalization. But so is specifying where clauses repeatedly.
The idea for using the WHERE clauses was to specifically avoid naming.
I
Ilya A. Kovalenko [EMAIL PROTECTED] writes:
I suggest one more standard date/time operator, to divide one interval
by another with numeric (or float, for example) result.
You'd have to define exactly what that means, which seems a little
tricky for incommensurate intervals. For instance what
On Wed, 2008-01-09 at 10:15 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Not sure why this hasn't been applied yet for 8.3
Because it doesn't fix the problem ... which is that the postmaster
kills the archiver (and the stats collector too) at what is now the
wrong point in
On Wed, 2008-01-09 at 16:20 +0100, Markus Schiltknecht wrote:
Simon Riggs wrote:
I have to admit I always found it kludgy to have objects named
invoices_2000_JAN and invoices_2000_FEB and so on. It's kind of a
meta-denormalization. But so is specifying where clauses repeatedly.
The
On Wed, 2008-01-09 at 15:53 +, Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Perhaps a good analogy is indexes. Index names are themselves kind of
redundant and people usually use names which encode up most of the information
of the definition.
But the reason you need
On Wed, 2008-01-09 at 15:10 +, Gregory Stark wrote:
The goal should be to improve vacuum, then
adjust the autovacuum_scale_factor as low as we can. As vacuum gets
cheaper the scale factor can go lower and lower. We shouldn't allow
the existing autovacuum behaviour to control the way
Simon Riggs [EMAIL PROTECTED] writes:
On Wed, 2008-01-09 at 02:25 +, Gregory Stark wrote:
Without naming the DBA would have to specify the same ranges every time he
wants to change the properties. He might do a SET read_only WHERE created_on
< '2000-01-01' one day then another SET
Hi,
Gregory Stark wrote:
That's an interesting thought. I think your caveats are right but with some
more work it might be possible to work it out. For example if a background
process processed the WAL and accumulated an array of possibly-dead tuples to
process in batch. It would wait whenever
Alvaro Herrera [EMAIL PROTECTED] writes:
Tom Lane wrote:
Comparing the behavior of this to my patch for HEAD, I am coming to the
conclusion that this is actually a *better* planning method than
removing the redundant join conditions, even when they're truly
redundant! The reason emerges
Hi,
Simon Riggs wrote:
With that in mind, can I clarify what you're thinking, please?
Sure, I can try to clarify:
2) the things you've been discussing are essential requirements of
partitioning and we could never consider it complete until they are also
included and we must therefore talk
Gregory Stark [EMAIL PROTECTED] writes:
Alvaro Herrera [EMAIL PROTECTED] writes:
Would it be a good idea to keep removing redundant clauses and rethink
the preference for clauseful joins, going forward?
I don't understand what's going on here. The planner is choosing one join
order over
[EMAIL PROTECTED] (Simon Riggs) writes:
I think we have an opportunity to bypass the legacy-of-thought that
Oracle has left us and implement something more usable.
This seems like a *very* good thing to me, from a couple of
perspectives.
1. I think you're right on in terms of the issue of the
[EMAIL PROTECTED] (Markus Schiltknecht) writes:
Simon Riggs wrote:
With that in mind, can I clarify what you're thinking, please?
Sure, I can try to clarify:
2) the things you've been discussing are essential requirements of
partitioning and we could never consider it complete until they
On Wed, 2008-01-09 at 17:30 +0100, Markus Schiltknecht wrote:
Simon Riggs wrote:
With that in mind, can I clarify what you're thinking, please?
Sure, I can try to clarify:
2) the things you've been discussing are essential requirements of
partitioning and we could never consider it
Hi,
Simon Riggs wrote:
When I delete all rows WHERE some_date < 'cut-off date' on a segment
boundary value that would delete all segments that met the criteria. The
following VACUUM will then return those segments to be read-write, where
they can then be refilled with new incoming data. The only
On Wed, 2008-01-09 at 18:04 +0100, Markus Schiltknecht wrote:
So not convinced of the need for named sections of tables yet. It all
seems like detail, rather than actually what we want for managing large
tables.
What do you think about letting the database system know the split point
Tom Lane [EMAIL PROTECTED] writes:
As an example, consider
t1 join t2 on (...) join t3 on (...) ... join t8 on (...)
and for simplicity suppose that each ON condition relates the new
table to the immediately preceding table, and that we can't derive
any additional join conditions
I wrote:
A perhaps less invasive idea is to discard any proposed mergeclauses
that are redundant in this sense. This would still require some
reshuffling of responsibility between select_mergejoin_clauses and
the code in pathkeys.c, since right now select_mergejoin_clauses
takes no account
Hackers;
I've noticed a strangeness on our cross-compiled uclibc linked
postgresql package that I was hoping to elicit some help with.
This is probably best described by showing some queries with
commentary, so on with that.
postgres=# select count(*) from pg_timezone_names where utc_offset !=
Chris Browne wrote:
_On The Other Hand_, there will be attributes that are *NOT* set in a
more-or-less chronological order, and Segment Exclusion will be pretty
useless for these attributes.
Really? I was hoping that it'd be useful for any data
with long runs of the same value repeated -
On Wed, Jan 09, 2008 at 11:47:31AM -0500, Chris Browne wrote:
[EMAIL PROTECTED] (Simon Riggs) writes:
I think we have an opportunity to bypass the legacy-of-thought that
Oracle has left us and implement something more usable.
This seems like a *very* good thing to me, from a couple of
Gregory Stark [EMAIL PROTECTED] writes:
So if I write (along with some other joins):
t1 join t2 on (t1.x=t2.x) where t1.x=3
I'll get a different result than if I write
t1, t2 where t1.x=3 and t2.x=3
In 8.3 you won't, because those are in fact exactly equivalent (and the
new EquivalenceClass
On Wed, 2008-01-09 at 20:03 +0100, Gavin Sherry wrote:
I think Simon's approach is
probably more complex from an implementation POV.
Much of the implementation is exactly the same, and I'm sure we agree on
more than 50% of how this should work already. We just need to close in
on the
Ron Mayer [EMAIL PROTECTED] writes:
Chris Browne wrote:
_On The Other Hand_, there will be attributes that are *NOT* set in a
more-or-less chronological order, and Segment Exclusion will be pretty
useless for these attributes.
Really? I was hoping that it'd be useful for any data
with
Hi,
On Wednesday 09 January 2008 19:27:41, Simon Riggs, you wrote:
The WHERE clause approach might easily allow more than 2 chunks and they
need not be logically contiguous. So the phrase split point doesn't
really fit because it implies a one dimensional viewpoint, but I'm happy
for
On Tue, 2008-01-08 at 02:12 +, Gregory Stark wrote:
I also don't understand how this proposal deals with the more common use case
of unloading and loading data. Normally in partitioned tables we build the
data in a side table until the data is all correct then load it as a
partition. If
On Wed, 2008-01-09 at 21:29 +0100, Dimitri Fontaine wrote:
On Wednesday 09 January 2008 19:27:41, Simon Riggs, you wrote:
The WHERE clause approach might easily allow more than 2 chunks and they
need not be logically contiguous. So the phrase split point doesn't
really fit because it
Tim Yardley [EMAIL PROTECTED] writes:
postgres=# select count(*) from pg_timezone_names where utc_offset != '00:00';
 count
-------
     0

postgres=# select count(*) from pg_timezone_names where utc_offset != '00:00';
 count
-------
   504
postgres=# select count(*) from pg_timezone_names
On Wed, Jan 09, 2008 at 02:38:21PM -0500, Chris Browne wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Or am I missing something?
Well, this can head in two directions...
1. Suppose we're not using an organize in CLUSTER order approach.
If the data is getting added in roughly by order of
On Wed, Jan 09, 2008 at 08:17:41PM +, Simon Riggs wrote:
On Wed, 2008-01-09 at 20:03 +0100, Gavin Sherry wrote:
I think Simon's approach is
probably more complex from an implementation POV.
Much of the implementation is exactly the same, and I'm sure we agree on
more than 50% of how
On Wed, Jan 09, 2008 at 08:51:30PM +, Simon Riggs wrote:
That's what I would have done if it was easier to do with constraint
exclusion
(did only date partitioning), as the reporting queries will always have
some
server (stats by services, each service being installed on 1 or
The year to month and day to second intervals should not overlap. The
standard doesn't actually allow it IIRC.
wt
On Jan 9, 2008 7:17 AM, Tom Lane [EMAIL PROTECTED] wrote:
Ilya A. Kovalenko [EMAIL PROTECTED] writes:
I suggest one more standard date/time operator, to divide one interval
by
On Wed, 9 Jan 2008 23:52:09 +0100
Gavin Sherry [EMAIL PROTECTED] wrote:
te restrictions.
Hmm, well if you found declaring the partitions a problem with
constraint exclusion it's not going to be any easier using other
declarative approaches.
Andrew Dunstan [EMAIL PROTECTED] writes:
The case below has just been reported to me. It sure looks odd. I'm
looking into it but any ideas would be welcome. The problem only occurs
if we are updating more than one row.
Pfree'ing something you didn't palloc is bad news...
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The case below has just been reported to me. It sure looks odd. I'm
looking into it but any ideas would be welcome. The problem only occurs
if we are updating more than one row.
Pfree'ing something you didn't palloc is bad
Andrew Dunstan wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The case below has just been reported to me. It sure looks odd. I'm
looking into it but any ideas would be welcome. The problem only
occurs if we are updating more than one row.
Pfree'ing something you
Hi Simon,
On Wed, Jan 02, 2008 at 05:56:14PM +, Simon Riggs wrote:
Segment Exclusion
-----------------
After we note that a segment is read-only we can scan the segment and
record min/max values for all columns. These are then implicit
constraints, which can then be used for segment
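The min/max recording step can be sketched as follows; this is an illustrative Python toy, not PostgreSQL code. One pass over a read-only segment derives implicit per-column bounds that a later scan can test its predicate against.

```python
def segment_bounds(rows):
    # rows: list of dicts mapping column name -> value.
    # Returns {column: (min, max)} for the whole segment, which acts
    # as an implicit constraint once the segment is read-only.
    bounds = {}
    for row in rows:
        for col, val in row.items():
            lo, hi = bounds.get(col, (val, val))
            bounds[col] = (min(lo, val), max(hi, val))
    return bounds

# A tiny two-row "segment":
seg = [{"ts": 100, "amt": 5}, {"ts": 180, "amt": 2}]
b = segment_bounds(seg)
# b["ts"] is (100, 180), so a scan for ts > 500 could skip this segment.
```

Since the segment is read-only, the bounds stay valid until a later write makes the segment read-write again.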
Tim Yardley [EMAIL PROTECTED] writes:
Can you strace the backend while it's doing this and see if there's a
difference in the series of kernel calls issued?
See attached strace. Let me know if you see anything enlightening.
Nope :-(. The strace output is *exactly* the same across all four
Andrew Dunstan [EMAIL PROTECTED] writes:
BTW, if calling pfree() at all here is actually a bug, then we should
probably fix it in the back branches. It looks more to me like the
problem was that pg_convert_from was calling pfree() with the wrong
argument - src_encoding_name instead of
Hi Simon,
On Wed, Jan 09, 2008 at 03:08:08PM +, Simon Riggs wrote:
Do people really like running all that DDL? There is significant
manpower cost in implementing and maintaining a partitioning scheme,
plus significant costs in getting it wrong.
Well... that's impossible for me to say.
Warren Turkal wrote:
The year to month and day to second intervals should not overlap. The
standard doesn't actually allow it IIRC.
They do on Postgres anyway. Otherwise the type is not all that useful,
is it?
--
Alvaro Herrera                http://www.CommandPrompt.com/
On Jan 10, 2008 2:17 AM, Tom Lane [EMAIL PROTECTED] wrote:
You'd have to define exactly what that means, which seems a little
tricky for incommensurate intervals. For instance what is the
result of '1 month' / '1 day' ?
Postgres has already made such definitions, to allow direct
On Jan 9, 2008 8:33 PM, Brendan Jurd [EMAIL PROTECTED] wrote:
I argued in a long-dead thread that we should disallow these kinds of
comparisons altogether, but I didn't manage to generate much
enthusiasm. The overall sentiment seemed to be that the slightly
bogus results were more useful than
I was wondering if there is a reason that the flex and bison and other
generated source files end up in the source directory when doing an
out-of-tree build. Would a patch that puts those files in the build
trees be accepted?
wt
On Jan 10, 2008 3:33 PM, Brendan Jurd [EMAIL PROTECTED] wrote:
1 month is deemed equal to 30 days, 1 day is deemed equal to 24 hours
(although for some reason we ignore the issue of years vs. days).
Sorry, a correction. The issue of years vs. days isn't ignored. A
year is just 12 months,
On Jan 9, 2008 9:29 PM, Brendan Jurd [EMAIL PROTECTED] wrote:
Sorry, a correction. The issue of years vs. days isn't ignored. A
year is just 12 months, which yields 12 * 30 = 360 days, which is
actually a pretty significant error (1.4% on average).
YEAR TO MONTH and DAY TO
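The 1.4% figure quoted above is easy to verify: compare a 12 * 30 = 360 day year against the mean Gregorian year of 365.2425 days. A quick check, with the mean-year constant as the only assumption:

```python
DAYS_360 = 12 * 30            # a year under the 1 month = 30 days rule
MEAN_GREGORIAN_YEAR = 365.2425  # mean calendar year length in days

# Relative error of the 360-day year, as a percentage.
error_pct = (MEAN_GREGORIAN_YEAR - DAYS_360) / MEAN_GREGORIAN_YEAR * 100
```

This comes out a little above 1.4%, consistent with the "1.4% on average" estimate.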
Warren Turkal [EMAIL PROTECTED] writes:
I was wondering if there is a reason that the flex and bison and other
generated source files end up in the source directory when doing an
out-of-tree build. Would a patch that puts those files in the build
trees be accepted?
Probably not, since our
Warren Turkal [EMAIL PROTECTED] writes:
YEAR TO MONTH and DAY TO {HOUR,MINUTE,SECOND} intervals should not
combine. PostgreSQL correctly doesn't allow {YEAR,MONTH} TO
{DAY,HOUR,MINUTE,SECOND} intervals,
Really? I think you've confused some unimplemented decorative syntax
with what the
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Wednesday, January 09, 2008 10:00 PM
To: Warren Turkal
Cc: Brendan Jurd; Ilya A. Kovalenko; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] operator suggest interval
Brendan Jurd [EMAIL PROTECTED] writes:
On Jan 10, 2008 2:17 AM, Tom Lane [EMAIL PROTECTED] wrote:
You'd have to define exactly what that means, which seems a little
tricky for incommensurate intervals. For instance what is the
result of '1 month' / '1 day' ?
Postgres has already made such
On Jan 10, 2008 5:00 PM, Tom Lane [EMAIL PROTECTED] wrote:
The spec's approach to datetime operations in general is almost totally
brain-dead, and so you won't find a lot of support around here for hewing
to the straight-and-narrow-spec-compliance approach. If they have not
even heard of
On Jan 9, 2008 10:44 PM, Brendan Jurd [EMAIL PROTECTED] wrote:
On Jan 10, 2008 5:00 PM, Tom Lane [EMAIL PROTECTED] wrote:
The spec's approach to datetime operations in general is almost totally
brain-dead, and so you won't find a lot of support around here for hewing
to the
Brendan Jurd [EMAIL PROTECTED] writes:
On Jan 10, 2008 5:00 PM, Tom Lane [EMAIL PROTECTED] wrote:
The spec's approach to datetime operations in general is almost totally
brain-dead, ...
It's true that the spec fails to consider DST, in that it doesn't
partition day and second intervals
On Jan 9, 2008 11:06 PM, Tom Lane [EMAIL PROTECTED] wrote:
Brendan Jurd [EMAIL PROTECTED] writes:
On Jan 10, 2008 5:00 PM, Tom Lane [EMAIL PROTECTED] wrote:
The spec's approach to datetime operations in general is almost totally
brain-dead, ...
It's true that the spec fails to consider
On Thu, 2008-01-10 at 03:06 +0100, Gavin Sherry wrote:
If the exclusion is executor driven, the planner cannot help but
create a seq scan plan. The planner will think you're returning 100X
rows when really you end up returning X rows. After that, all
decisions made by the planner are totally
On Jan 9, 2008 10:00 PM, Tom Lane [EMAIL PROTECTED] wrote:
Really? I think you've confused some unimplemented decorative syntax
with what the underlying datatype will or won't do.
Fair enough. The underlying type certainly will do it since it works
without the opt_interval.
This is
On Jan 9, 2008 9:51 PM, Tom Lane [EMAIL PROTECTED] wrote:
Warren Turkal [EMAIL PROTECTED] writes:
I was wondering if there is a reason that the flex and bison and other
generated source files end up in the source directory when doing an
out-of-tree build. Would a patch that puts those