Greg Smith wrote:
I'm not the sort to be too concerned myself that
the guy who thinks he's running a DW on a system with 64MB of RAM might
get bad settings, but it's a fair criticism to point that out as a problem.
In defense of thinking about very small configurations, I've seen many
cases
Joshua D. Drake wrote:
On Thu, 2008-12-04 at 10:20 -0800, Ron Mayer wrote:
Greg Smith wrote:
I'm not the sort to be too concerned myself that
the guy who thinks he's running a DW on a system with 64MB of RAM might
get bad settings, but it's a fair criticism to point that out as a problem
Stephen R. van den Berg wrote:
... it would be orders of magnitude more difficult for
a novice to create the sample database from contrib or anywhere else.
It seems to me that *this* is the more serious problem that
we should fix instead.
If, from the psql command prompt I could type:
psql=#
Michael Meskes wrote:
On Wed, Nov 12, 2008 at 02:28:56PM -0800, Ron Mayer wrote:
Merging of the interval style into ecpg attached.
Thanks for caring about the ecpg changes too.
Thanks for the comments. Updated the patch.
I know little enough about ecpg that I can't really tell
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Once this settles I suppose I should post a ECPG patch that's based
off of these Decode/Encode interval functions too?
Yeah, if you want. I think you'll find that the datetime code has
drifted far enough since ecpg forked it that you'll
Brendan Jurd wrote:
On Sat, Nov 1, 2008 at 3:42 PM, Ron Mayer [EMAIL PROTECTED] wrote:
# Patch 3: cleanup.patch
Fix rounding inconsistencies and refactor interval input/output code
Compile, testing and regression tests all checked out.
I've picked up on a few code style issues, fixes
Brendan Jurd wrote:
On Wed, Nov 12, 2008 at 5:32 AM, Ron Mayer
[EMAIL PROTECTED] wrote:
Brendan Jurd wrote:
* AdjustFractionalSeconds = AdjustFractSeconds
* AdjustFractionalDays = AdjustFractDays
Otherwise many lines were over 80 chars long.
And it happened often enough I thought
Tom Lane wrote:
...failure case ... interval 'P-1Y-2M3DT-4H-5M-6';
This isn't the result I'd expect, and AFAICS the ISO spec does *not*
allow any unit markers to be omitted in the format with designators.
Yes, this is true. I see you already made the change.
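The rule at issue — that in ISO 8601's "format with designators" every number must carry its unit marker — can be sketched with a small validator. This is a Python approximation of the grammar for illustration only, not PostgreSQL's C parser; the per-field signs it accepts are a PostgreSQL extension, not part of the spec.

```python
import re

# Rough grammar for the ISO 8601 format with designators: PnYnMnDTnHnMnS.
# Every number must be followed by its unit letter.  The optional '-'
# before each field models the PostgreSQL extension, not the spec.
DESIGNATOR = re.compile(
    r'^P(-?\d+Y)?(-?\d+M)?(-?\d+D)?'
    r'(T(-?\d+H)?(-?\d+M)?(-?\d+(\.\d+)?S)?)?$'
)

def is_valid_designator(s: str) -> bool:
    # Reject a bare "P", a bare "PT", or a trailing "T" with no time fields.
    if s in ('P', 'PT') or s.endswith('T'):
        return False
    return bool(DESIGNATOR.match(s))

print(is_valid_designator('P-1Y-2M3DT-4H-5M-6S'))  # True (signs: PG extension)
print(is_valid_designator('P-1Y-2M3DT-4H-5M-6'))   # False: '6' lacks its 'S'
```

The failure case quoted above is rejected precisely because the final field has no unit designator.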
Tom Lane wrote:
Applied with
Tom Lane wrote:
The original INT64 coding here is exact (at least for the common case
where fval is zero) but I'm not convinced that your revision can't
suffer from roundoff error.
Good point. I'll study this tonight; and either try to make a patch
that'll be exact where fval's zero or try to
Brendan Jurd wrote:
On Sat, Nov 8, 2008 at 2:19 AM, Ron Mayer [EMAIL PROTECTED] wrote:
Hmmm... Certainly what I had in datatype.sgml was wrong, but I'm
now thinking 5.5.4.2.1 and 5.5.4.2.2 would be the most clear?
Sorry, I don't understand what you mean by 5.5.4.2.1. In the spec
Ah
Ron Mayer wrote:
Ah! That 5.5.4.2.1 comes from apparently an old Oct 2000
draft version of the spec titled ISO/FDIS 8601. (For now you can
see it here: http://0ape.com/postgres_interval_patches/ISO-FDIS-8601.pdf )
I'll fix all the links to point to the 2004 spec.
I updated my web site[1
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Brendan Jurd wrote:
...I did notice one final ...
Just checked in a fix to that one; and updated my website at
http://0ape.com/postgres_interval_patches/
and pushed it to my (hopefully fixed now) git server.
Applied with some revisions: I
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Rather than forcing Postgres mode; couldn't it put a
set intervalstyle = [whatever the current interval style is]
in the dump file?
This would work for loading into a PG >= 8.4 server, and fail miserably
for loading into pre-8.4 servers
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
(3) Put something into the dump file that will make the old
server reject the file rather than successfully loading
wrong data? (Some if intervalstyle==std and version < 8.3
abort loading the restore logic?)
There isn't any
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Tom Lane wrote:
The trouble is that older servers will (by default) report
an error on that line and keep right on chugging.
Not necessarily. Couldn't we put
select * from (select substring(version() from '[0-9\.]+') as version
Tom Lane wrote:
Oh, I see what you're trying to do. The answer is no. We're not going
to totally destroy back-portability of dumps, especially not for a
problem that won't even affect most people (negative intervals are
hardly common).
Similarly I wonder if pg_dump should add a fail if
Tom Lane wrote:
BTW, I just noticed that CVS HEAD has a bug in reading negative SQL-spec
literals:
regression=# select interval '-2008-10';
regression=# select interval '--10';
Surely the latter must mean -10 months. This is orthogonal to the
current patch ...
Perhaps the below
Tom Lane wrote:
Another thought here ... I'm looking at the sign hack
+ if (IntervalStyle == INTSTYLE_SQL_STANDARD
and not liking it very much. Yes, it does the intended thing for strict
SQL-spec input, but it seems to produce a bunch of weird corner cases
for non-spec
Brendan Jurd wrote:
On Fri, Nov 7, 2008 at 3:35 AM, Ron Mayer [EMAIL PROTECTED] wrote:
I think I updated the web site and git now, and
'P-00-01' is now accepted. It might be useful if
someone double checked my reading of the spec, tho.
I've tested out your latest revision and read
Tom Lane wrote:
I've started reviewing this patch for commit, and I find myself a bit
disturbed by its compatibility properties. The SQL_STANDARD output
style is simply ambiguous: what is meant by
-1 1:00:00
? What you get from that will depend on the intervalstyle setting at
the
Ron Mayer wrote:
Tom Lane wrote:
*pg_dump had better force Postgres mode*. We can certainly do that with
a couple more lines added to the patch, but it's a bit troublesome that
we are boxed into using a nonstandard dump-data format until forever.
Ok. I see that is the concern..
Rather than
Tom Lane wrote:
ISO date format is read the same regardless of recipient's datestyle,
so pg_dump solves this by forcing the dump to be made in ISO style.
The corresponding solution for intervals will be to dump in POSTGRES
style, not SQL_STANDARD style, which seems a bit unfortunate.
[reading
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Rather than forcing Postgres mode; couldn't it put a
set intervalstyle = [whatever the current interval style is]
in the dump file?
This would work for loading into a PG >= 8.4 server, and fail miserably
for loading into pre-8.4 servers
Brendan Jurd wrote:
I've applied them with a couple minor changes.
* If ISO 8601 5.5.3.1.d's statement The designator T shall be
absent if all of the time components are absent. also applies
to 5.5.4.2.2; then I think the 'T' needed to be inside the
optional tags, so I moved it there. The link
Ron Mayer wrote:
Brendan Jurd wrote:
'T' ... optional
Indeed that's a bug in my code; where I was sometimes
requiring the 'T' (in the ISO8601 alternative format) and
sometimes not (in the ISO8601 format from 5.5.4.2.1).
Below's a test case. If I read the spec[1] right both of those
should
Brendan Jurd wrote:
On Wed, Nov 5, 2008 at 7:34 AM, Ron Mayer [EMAIL PROTECTED] wrote:
Brendan Jurd wrote:
...new interval
Review of the other two patches coming soon to a mail client near you.
Oh - and for review of the next patch, ISO 8601's spec would no doubt
be useful.
I think
Brendan Jurd wrote:
Reviewing this patch now; I'm working from the 'iso8601' branch in
... I thought I'd post a patch of my own (against your branch)
and accompany it with a few explanatory notes.
Wow thanks! That's very helpful (though it might have been more
fair to your time if you just
Brendan Jurd wrote:
...Sep 18, 2008... Ron Mayer [EMAIL PROTECTED] wrote:
The attached patch
(1) adds a new GUC called IntervalStyle that decouples interval
output from the DateStyle GUC, and
(2) adds a new interval style that will match the SQL standards
for interval
Ah. And one final question regarding functionality.
It seems to me that the last remaining place where we input
a SQL-2008 standard literal and do something different from
what the standard suggests is with the string:
'-1 2:03:04'
The standard seems to say that the - affects both the
days
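The spec's reading — a single leading sign distributing over every field of the day-time literal — can be sketched as follows. This is an illustration in Python of the semantics under discussion, with a function name of my own; it is not PostgreSQL's DecodeInterval() code.

```python
def parse_daytime_literal(s: str):
    """Parse 'D H:M:S' per the SQL-spec reading: one leading sign
    applies to the whole value, so '-1 2:03:04' means
    -1 days -2 hours -3 minutes -4 seconds."""
    s = s.strip()
    sign = -1 if s.startswith('-') else 1
    body = s.lstrip('+-')
    days_part, time_part = body.split(' ')
    h, m, sec = (int(x) for x in time_part.split(':'))
    return (sign * int(days_part), sign * h, sign * m, sign * sec)

print(parse_daytime_literal('-1 2:03:04'))
# SQL-spec reading: (-1, -2, -3, -4).  The pre-8.4 PostgreSQL reading
# discussed in the thread was -1 days +2:03:04 instead.
```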
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Ah. And one final question regarding functionality.
It seems to me that the last remaining place where we input
a SQL-2008 standard literal and do something different from
what the standard suggests is with the string:
'-1 2:03:04
Brendan Jurd wrote:
...Sep 18, 2008...Ron Mayer [EMAIL PROTECTED] wrote:
(1) ...GUC called IntervalStyle...
(2) ...interval style that will match the SQL standards...
...an initial review...
When I ran the regression tests, I got one failure in the new interval
Fixed, and I did
; but no doubt there could be
other bad habits I have as well.
Ron
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Ron Mayer wrote:
Tom Lane wrote:
In fact, given that we are now
somewhat SQL-compliant on interval input, a GUC that selected
PG traditional, SQL-standard, or ISO 8601 interval output format seems
like it could be a good idea.
Attached are updated versions of the Interval patches (SQL
Ron Mayer wrote:
Ron Mayer wrote:
Tom Lane wrote:
In fact, given that we are now
somewhat SQL-compliant on interval input, a GUC that selected
PG traditional, SQL-standard, or ISO 8601 interval output format seems
like it could be a good idea.
Attached are updated versions
Ron Mayer wrote:
Ron Mayer wrote:
Ron Mayer wrote:
Tom Lane wrote:
In fact, given that we are now
somewhat SQL-compliant on interval input, a GUC that selected
PG traditional, SQL-standard, or ISO 8601 interval output format
seems
like it could be a good idea.
Attached are updated
Jeff Davis wrote:
Currently, we use correlation to estimate the I/O costs of an index
scan. However, this has some problems:
It certainly helps some cases.
Without the patch, the little test script below ends up picking the
third fastest plan (a seq-scan) instead of a faster bitmapscan, or
an
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
...bitmap cost estimates didn't also change much
By definition, a bitmap scan's cost isn't affected by index order
correlation.
No? I think I understand that for index scans the correlation
influenced how many data pages
Tom Lane wrote:
A bad estimate for physical-position correlation has only limited
impact,
Ah! This seems very true with 8.3 but much less true with 8.0.
On a legacy 8.0 system I have a hard time avoiding cases where
a query like
select * from addresses where add_state_or_province = 'CA';
Robert Haas wrote:
I think the real question is: what other kinds of correlation might
people be interested in representing?
Yes, or to phrase that another way: What kinds of queries are being
poorly optimized now and why?
The one that affects our largest tables are ones where we
have an
Josh Berkus wrote:
Yes, or to phrase that another way: What kinds of queries are being
poorly optimized now and why?
Well, we have two different correlation problems. One is the problem of
dependent correlation, such as the 1.0 correlation of ZIP and CITY fields,
which is a common problem. This
[EMAIL PROTECTED] wrote:
So it seems that intagg should rather live in an 'examples' section than
in contrib?
Perhaps. Seems my old intagg use case from 8.1 is not really needed
anymore since it seems ANY got much smarter since then. Cool.
Josh Berkus wrote:
So it sounds like intagg is still in use/development. But ... is it
more of an example, or is it useful as a type/function in production?
Where I work we (and our customers) use it in our production systems.
At first glance it seems our reasons for using it are mostly
Tom Lane wrote:
In particular, if the OS lays out successive file pages in a way that
provides zero latency between logically adjacent blocks, I'd bet a good
bit that a Postgres seqscan would miss the read timing every time, and
degrade to handling about one block per disk rotation.
Unless the
Josh Berkus wrote:
intagg: ... Has not been updated since 2001.
Really? Just a couple years ago (2005) bugs we reported were
still getting fixed in it:
http://archives.postgresql.org/pgsql-bugs/2005-03/msg00202.php
http://archives.postgresql.org/pgsql-bugs/2005-04/msg00165.php
Here's one:
Ron Mayer [EMAIL PROTECTED] writes:
[some other interval rounding example]
I don't much like the forced rounding to two digits here, but changing
that doesn't seem like material for back-patching. Are you going to
fix that up while working on your other patches?
Tom Lane wrote:
In the integer-timestamp world we know that the number is exact in
microseconds. We clearly ought to be prepared to display up to six
fractional digits, but suppressing trailing zeroes in that seems
appropriate.
Great.
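With integer timestamps the seconds field is exact in microseconds, so the display rule above amounts to something like the following. This is a Python sketch of the formatting rule only, with a function name of my own; the backend does this in C.

```python
def format_seconds(sec: int, usec: int) -> str:
    """Format whole seconds plus an exact microsecond remainder,
    showing up to six fractional digits but suppressing trailing
    zeroes.  All-integer, so no roundoff is possible."""
    if usec == 0:
        return str(sec)
    frac = f"{usec:06d}".rstrip('0')   # exact: no float involved
    return f"{sec}.{frac}"

print(format_seconds(6, 700000))  # '6.7'
print(format_seconds(6, 123456))  # '6.123456'
print(format_seconds(6, 0))       # '6'
```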
We could try to do the same in the float case, but I'm a
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Tom Lane wrote:
We could try to do the same in the float case, but I'm a bit worried
about finding ourselves showing 1234567.79 ...
If I understand the code right [I didn't...]
The problem is ... seconds field that includes hours
HEAD and 8.3.
Ron Mayer
== ON HEAD
regression=# set datestyle to sql;
SET
regression=# select '-10 mons -3 days +03:55:06.70
Ron Mayer wrote:
Ron Mayer wrote:
Tom Lane wrote:
...GUC that selected PG traditional, SQL-standard... interval output
format seems like it could be a good idea.
This is an update to the earlier SQL-standard-interval-literal output
patch that I submitted here:
http://archives.postgresql.org
Ron Mayer wrote:
Tom Lane wrote:
In fact, given that we are now
somewhat SQL-compliant on interval input, a GUC that selected
PG traditional, SQL-standard, or ISO 8601 interval output format seems
like it could be a good idea.
This patch (that works on top of the IntervalStyle patch I
posted
Tom Lane wrote:
Yeah, bug all the way back --- applied.
I don't much like the forced rounding to two digits here, but changing
that doesn't seem like material for back-patching. Are you going to
fix that up while working on your other patches?
Gladly. I hate that too.
I think I can also
Ron Mayer wrote:
Tom Lane wrote:
Yeah, bug all the way back --- applied.
I don't much like the forced rounding to two digits here, but changing
that doesn't seem like material for back-patching. Are you going to
fix that up while working on your other patches?
Gladly. I hate that too
Gevik Babakhani wrote:
Has there been any idea to port PG to a more modern programming language
like C++? Of course there are some minor obstacles like a new OO design,
this being a gigantic task to perform and rewriting almost everything etc...
I am very interested to hear your opinion.
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I'm not in favour of introducing the concept of spindles
In principle I quite strongly disagree with this
Number of blocks to prefetch is an internal implementation detail that the DBA
has absolutely no way to know what the
the existing regression tests.
Does this seem reasonable?
Ron
*** a/src/backend/utils/adt/datetime.c
--- b/src/backend/utils/adt/datetime.c
***
*** 2888,2894 DecodeInterval(char **field, int *ftype, int nf, int range,
{
case DTK_MICROSEC:
#ifdef HAVE_INT64_TIMESTAMP
Gregory Stark wrote:
Ron Mayer [EMAIL PROTECTED] writes:
I'd rather a parameter that expressed things more in terms of
measurable quantities [...]
...What we're
dealing with now is an entirely orthogonal property of your system: how many
concurrent requests can the system handle.
Really
Ron Mayer wrote:
Tom Lane wrote:
...GUC that selected PG traditional, SQL-standard... interval output
format seems like it could be a good idea.
This is an update to the earlier SQL-standard-interval-literal output
patch that I submitted here:
http://archives.postgresql.org/message-id
Steve Crawford wrote:
Tom Lane wrote:
Yeah. What this is about is how long the *community* supports 7.4...
Is there any way to poll the community and see how much people
in the community care about 7.4 community support?
It seems possible that most people with large important 7.4 systems
Tom Lane wrote:
Stephen R. van den Berg [EMAIL PROTECTED] writes:
Intervals are scalars, not an addition of assorted values; alternating signs
between fields would be wrong.
Sorry, you're the one who's wrong on that. We've treated intervals as
three independent fields for years now (and
designators, which
+ * are pretty ugly. The format looks something like
+ * P1Y1M1DT1H1M1.12345S
+ * but useful for exchanging data with computers instead of humans.
+ * - ron 2003-07-14
+ *
+ * And ISO's SQL 2008 standard specifies standards for
+ * year-month literals (that look like
the spec right, are there any problems with this,
and if not, could I ask that the patch at the end of this email
be applied?
Ron
===
== with this patch
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Short summary:
I think this patch fixes a bug with sql-spec negative interval literals.
Hmm. I'm a bit concerned about possible side-effects on other cases:
what had been seen as two separate tokens will now become one token
for *all
Tom Lane wrote:
If I read SQL 200N's spec correctly
select interval '-1 1:00:00';
should mean -1 days -1 hours,
yet 8.3 sees it as -1 days +1 hours.
I think we are kind of stuck on this one. If we change it, then how
would one represent -1 days +1 hours? The spec's format is only
Unless I'm compiling stuff wrong, it seems HEAD is giving me
slightly different output on Intervals than 8.3 in the roundoff
of seconds. 8.3 was rounding to the nearest fraction of a second,
HEAD seems to be truncating.
In the psql output below it shows 8.3.1 outputting 6.70 secs
while the
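The round-versus-truncate difference is easy to state in integer microseconds. The value below is illustrative only (the thread's exact psql output is elided); the point is that at two displayed digits the two policies disagree on anything past the cut.

```python
# Illustrative value: 6.705 seconds, held exactly as integer microseconds.
usec = 6705000

# 8.3-style: round-half-up to two fractional digits.
rounded = (usec + 5000) // 10000 / 100.0
# Behavior observed on HEAD: truncate to two fractional digits.
truncated = usec // 10000 / 100.0

print(rounded, truncated)  # 6.71 6.7
```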
Kevin Grittner wrote:
...not the only place where the float-timestamps code has
rounding behavior that doesn't appear in the integer-timestamps
code. ...
I find the results on 8.3.3 with integer timestamps surprising:
Agreed it's surprising and agreed there are more places.
Sounds like I
Tom Lane wrote:
This is not the only place where the float-timestamps code has rounding
behavior that doesn't appear in the integer-timestamps code.
Yeah... For that matter, I find this surprising as well:
regression=# select '0.7 secs'::interval, ('7 secs'::interval/10);
interval |
Tom Lane wrote:
support for SQL-spec interval literals. I decided to go look at exactly
how unfinished it was, and it turns out that it's actually pretty close.
Hence the attached proposed patch ;-)
Is this code handling negative interval literals right?
I think I quote the relevant spec part
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Is this code handling negative interval literals right?
I think I quote the relevant spec part at the bottom.
We support independent signs for the different components of the
Even so it surprises me that:
'-1-1'::interval gives me a day
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
interval ... sql_standard...iso_8601...
backward_compatible ...depends... on ... DateStyle...
...How about decoupling interval_out's behavior
from DateStyle altogether, and instead providing values of IntervalStyle
that match all
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
... ISO 8601 intervals ...
On the output side, seems like a GUC variable
is the standard precedent here. I'd still vote against overloading
DateStyle --- it does too much already --- but a separate variable for
interval style wouldn't
Tom Lane wrote:
somewhat SQL-compliant on interval input, a GUC that selected
PG traditional, SQL-standard, or ISO 8601 interval output format seems
like it could be a good idea.
Trying to do the SQL-standard output now, and have a question
of what to do in the SQL-standard mode when trying to
Tom Lane wrote:
The reason it's not SQL-standard is the data value isn't.
So not a problem. Someone conforming to the spec limits on
what he puts in will see spec-compliant output. I think all
you need is 'yyy-mm dd hh:mm:ss' where you omit yyy-mm if
zeroes, omit dd if zero, omit hh:mm:ss if
Ron Mayer wrote:
Tom Lane wrote:
you need is 'yyy-mm dd hh:mm:ss' where you omit yyy-mm if
zeroes, omit dd if zero, omit hh:mm:ss if zeroes (but maybe
only if dd is also 0? otherwise your output is just dd which
is uncomfortably ambiguous).
Oh, and if both parts are 0, I guess we desire
Tom Lane wrote:
I think all
you need is 'yyy-mm dd hh:mm:ss' where you omit yyy-mm if
zeroes, omit dd if zero, omit hh:mm:ss if zeroes (but maybe
only if dd is also 0? otherwise your output is just dd which
is uncomfortably ambiguous).
Cool. I think I have it pretty much working with a new
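The omission rules described above can be sketched like this. A Python sketch of the formatting policy only, with field names of my own; it is not the committed interval_out code, and it ignores negative-field handling.

```python
def sql_standard_out(years, months, days, h, m, s):
    """Emit 'yyy-mm dd hh:mm:ss', omitting the year-month part if
    zero and the time part if zero -- but keep the time part whenever
    days is the only nonzero field, so an ambiguous bare 'dd' is
    never emitted."""
    parts = []
    if years or months:
        parts.append(f"{years}-{months}")
    if days:
        parts.append(str(days))
    if h or m or s or (days and not (years or months)):
        parts.append(f"{h}:{m:02d}:{s:02d}")
    return " ".join(parts) if parts else "0"

print(sql_standard_out(1, 2, 3, 4, 5, 6))  # '1-2 3 4:05:06'
print(sql_standard_out(0, 0, 3, 0, 0, 0))  # '3 0:00:00', not a bare '3'
print(sql_standard_out(0, 0, 0, 0, 0, 0))  # '0'
```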
Tom Lane wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
I am not sure about some of the corner cases --- anyone want to see if
their understanding of the spec for interval string is different?
The patch seems to support extensions to the standard.
Right.
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Back a while ago (2003) there was some talk about replacing
some of the non-standard extensions with shorthand forms of
intervals with ISO 8601 intervals that have a similar but
not-the-same shorthand.
I think *replacement* would be a hard
Tom Lane wrote:
The other problem is that the SQL spec clearly defines an interval
literal syntax, and it's not this ISO thing. So even without backward
compatibility issues, 8601-only doesn't seem like it would fly.
Oh. I wasn't proposing 8601-only. Just the one-character
shorthands like
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
'1Y1M'::interval ... minute ... month
Hmmm. I would say that the problem with that is not that it's
nonstandard but that it's ambiguous.
Ah yes.
Our documentation...says...or abbreviations.
...What if we just tweak the code to
reject
Robert Haas wrote:
bits...bytes...blocks...m...M
I can't imagine that taking away the B is somehow going to
be more clear.
If clarity is the goal, I'd want the following:
a) Verbosely spelling out the units in the default config file
temp_buffers = 16 megabytes
or
temp_buffers = 16
Marko Kreen wrote:
Thirdly, please don't use standard units argument, unless you plan to
propose use of KiB, MiB, GiB at the same moment.
In defense of standard units, if the postgres docs say
"Postgres will round up to the nearest power of 2",
kB and MB seem very clear to me. If we want to
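If the docs do state such a rule, the arithmetic is simple enough to spell out exactly. A Python sketch of the rounding rule being debated; whether PostgreSQL actually rounds a given GUC this way is the open question in the thread.

```python
def round_up_pow2(n_kb: int) -> int:
    """Round a size in kB up to the nearest power of two of kB."""
    p = 1
    while p < n_kb:
        p *= 2
    return p

print(round_up_pow2(1000))  # 1024: a '1000kB' setting would behave as 1MB
print(round_up_pow2(1024))  # 1024: already a power of two
```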
Peter Eisentraut wrote:
On Tuesday 19 August 2008 22:12:47 Greg Sabino Mullane wrote:
Text space is cheap,
I'd offer the alternative theory that anything that is longer than one screen
is overwhelming and unwieldy.
One more benefit of a small file is that it makes it easier to ask someone
Bruce Momjian wrote:
Josh Berkus wrote:
...simple web applications, where
queries are never supposed to take more than 50ms. If a query turns up
with an estimated cost of 100, then you know something's wrong;
...
How about a simpler approach that throws an error or warning for
Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
AFAIK, there is nothing that requires pl/perl, pl/tcl or pl/python to be
in core either.
True, but I think it's a good idea to have at least one such in core,
as a prototype to help us track the issues associated with loading a
large
Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
Manoel Henrique [EMAIL PROTECTED] writes:
Yes, I'm relying on the assumption that backwards scan has the same cost as
forward scan, why shouldn't it?
G...we expect that forward scans will result
in the kernel doing read-ahead, ...
A
Tom Lane wrote:
What I think would perhaps be worth investigating is a compile-time
(or at latest initdb-time) option that flips the case folding behavior
to SQL-spec-compliant and also changes all the built-in catalog entries
to upper case. We would then have a solution we could offer to
chris wrote:
C++0x standards
committee where they finalized long long as being required to be 8
AFAIK, we oughtn't care what C++ standards say, because PostgreSQL is
implemented in C, and therefore needs to follow what the *C* standards
say.
I agree; the C++ standards shouldn't matter one bit
Simon Riggs wrote:
IMHO we should have a single parameter which indicates how much planning
time we consider acceptable for this query. e.g.
optimization_level = 2 (default), varies 1-3
Couldn't the planner itself make a good guess if it should
keep trying based on the estimated cost?
if
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Couldn't the planner itself make a good guess if it should
keep trying based on the estimated cost?
if (the_best_plan_I_found_so_far_looks_like_itll_take_an_hour)
keep_optimizing_for_a_few_minutes
Tom Lane wrote:
Another issue is that it might not be possible to update a page for
lack of space. Are we prepared to assume that there will never be a
transformation we need to apply that makes the data bigger? In such a
situation an in-place update might be impossible, and that certainly
Gregory Stark wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
...default_statistics_target?...Uhh 10.
Ah, but we only ever hear about the cases where it's wrong of course. In other
words even if we raised it to some optimal value we would still have precisely
the same experience of seeing
Joshua D. Drake wrote:
Tom Lane wrote:
Peter Eisentraut [EMAIL PROTECTED] writes:
- If we know better values, why don't we set them by default?
The problem is: better for what?
That is where some 80% solution sample config files come in.
+1.
At work I use 3 templates.
* One for
Tom Lane wrote:
How far could we get with the answers to just three questions:
* How many concurrent queries do you expect to have?
* How much RAM space are you willing to let Postgres use?
* How much overhead disk space are you willing to let Postgres use?
+1 to this approach - these are the
Steve Atkins wrote:
... cross-platform (Windows, Linux, Solaris, OS X as a bare
minimum)
I wonder how cross-platform the tuning algorithm itself is.
I could also imagine that decisions like do I let the OS page
cache, or postgres's buffer cache get most of the memory are
extremely OS
Gregory Stark wrote:
I think we do a pretty good job of this already. Witness things like
effective_cache_size -- imagine if this were nested_loop_cache_hit_rate for
example, good luck figuring out what to set it to.
I think either of these are fine if we describe how to measure
them.
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Wed, 2008-04-23 at 12:07 -0400, Tom Lane wrote:
To be acceptable, a GIT patch would have to be optional and it
...
I was considering a new pg_index column. Or else we'd have to fix
the storage-parameter infrastructure to support
Decibel! wrote:
we can just look at
the hit rate for the object. But we'd also need stats for how often we
find pages for a relation in the OS cache, which no one has come up with
a good method for.
Makes me wonder if we could (optionally, I guess, since timing
stuff is apparently slow on
Heikki Linnakangas wrote:
Ron Mayer wrote:
One use case that I think GIT would help a lot with are my
large address tables that are clustered by zip-code but
often queried by State, City, County, School District,
Police Beat, etc.
I imagine a GIT index on state would just occupy
a couple
Heikki Linnakangas wrote:
* GIT (Grouped Index Tuple) indexes, which achieve index space savings
in btrees by having a single index tuple represent multiple heap tuples
[...]
Another issue is that we'd need to check how much of the use-case for
GIT has been taken over by HOT.
There is,
Aidan Van Dyk wrote:
* Greg Sabino Mullane [EMAIL PROTECTED] [080403 09:54]:
I emphatically do NOT mean
move to pgfoundry, which is pretty much a kiss of death.
But that begs the question of *why* it's a kiss of death?
For instance, in perl land, having something in CPAN and not in
perl
D'Arcy J.M. Cain wrote:
Check out NetBSD pkgsrc as a model. It is very flexible. One nice
thing would be the ability to specify where the packages are rather
than always insisting that they be on pgfoundry.
Yup - a feature shared by RubyGems:
gem install rails --source