installers whether
from pgfoundry or elsewhere.
Ron
PS: Regarding pgfoundry and credibility; it seems the stature
and image of pgfoundry would go up a lot if postgresql itself
were hosted there. But no, I'm not advocating that.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
Sam Mason wrote:
On Thu, Apr 03, 2008 at 07:07:56PM +0200, Svenne Krap wrote:
ID serial
Username varchar
Password_md5 varchar
Password_sha1 varchar
...
Why not just use SHA-512? You get many more quality bits that way.
Or if he just wanted to use builtin tools and reduce accidental
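As a rough sketch of the salted-hash approach being discussed (helper names are illustrative, not from the thread; note that modern practice prefers a deliberately slow KDF such as PBKDF2 or bcrypt over a bare fast hash):

```python
import hashlib
import os

def hash_password(password):
    """Return (salt_hex, digest_hex); the random salt defeats
    precomputed rainbow tables."""
    salt = os.urandom(16)
    return salt.hex(), hashlib.sha512(salt + password.encode()).hexdigest()

def check_password(password, salt_hex, digest_hex):
    salt = bytes.fromhex(salt_hex)
    return hashlib.sha512(salt + password.encode()).hexdigest() == digest_hex

salt, digest = hash_password("s3cret")
assert check_password("s3cret", salt, digest)
assert not check_password("wrong", salt, digest)
```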
Andrew Dunstan wrote:
Tom Lane wrote:
as having better system support for packages or modules or whatever
you want to call them; and maybe we also need some marketing-type
...re-raise the question of getting rid of contrib...
The PostgreSQL Standard Modules.
While renaming, could we go
Andrew Dunstan wrote:
I think it'd be especially cool if one could one day have a command
pg_install_module [modulename] -d [databasename]
Yes, and the CPAN analogy that has been in several minds, but it only
goes so far. Perl and Ruby are languages - Postgres is a very different
animal.
Magnus Hagander wrote:
On Tue, Feb 26, 2008 at 08:28:11AM -0500, Andrew Dunstan wrote:
Simon Riggs wrote:
Separate files seems much simpler...
Yes, we need to stick to the KISS principle.
ISTM that we could simply invent a new archive format of 'd' for directory.
Yeah, you can always ZIP (or
Magnus Hagander wrote:
If they don't have an actual database, it's fairly common to use SQLite or
similar just to get proper database storage for it.
With all the concern about parsing in this thread, perhaps it's best
if this config-overrides file not be of the same syntax as postgresql.conf
Wouldn't seeing which patches are trickling in during the first months
of 8.4 development give a better indication of when it should be
freezable? I'm all in favor of having lots of advance notice and
predictable schedules --- but it seems in the next month or so we'll
have a lot more insight into
Decibel! wrote:
Yes, this problem goes way beyond OOM. Just try and configure
work_mem aggressively on a server that might see 50 database
connections, and do it in such a way that you won't swap. Good luck.
That sounds like an even broader and more difficult problem
than managing memory.
Simon Riggs wrote:
Can I ask when the Feature Freeze for next release will be?
Also, from http://www.postgresql.org/about/press/faq
Q: When will 8.4 come out?
A: Historically, PostgreSQL has released approximately
every 12 months and there is no desire in the community
to
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
... OOM_Killer
Egad. Whoever thought *this* was a good idea should be taken out
and shot:
If I read this right (http://lkml.org/lkml/2007/2/9/275), even the
shared memory is counted many times (once per child) for the
parent
Alvaro Herrera wrote:
Yeah, the only way to improve the OOM problem would be to harass the
Linux developers to tweak badness() so that it considers the postmaster
to be an essential process rather than the one to preferentially kill.
Wouldn't the more general rule that Jeff Davis pointed out
Tom Lane wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
Or is someone prepared to argue that there are no applications out
there that will be broken if the same query, against the same unchanging
table, yields different results from one trial to the
Jeff Davis wrote:
On Mon, 2008-01-28 at 23:13 +, Heikki Linnakangas wrote:
clusteredness didn't get screwed up by a table that looks like this:
5 6 7 8 9 1 2 3 4
...test table with a similar
distribution to your example, and it shows a correlation of about -0.5,
but it should
Mark Mielke wrote:
Mark Mielke wrote:
Counts, because as we all know, PostgreSQL count(*) is slow, and in
any case, my count(*) is not on the whole table, but on a subset.
Doing this in a general way seems complex to me as it would need to be
able to evaluate whether a given INSERT or UPDATE
Gavin Sherry wrote:
CREATE TABLE is modified to accept a PARTITION BY clause. This clause
contains one or more partition declarations. The syntax is as follows:
PARTITION BY {partition_type} (column_name[, column_name...])
[PARTITIONS number]
(
partition_declaration[,
Chris Browne wrote:
_On The Other Hand_, there will be attributes that are *NOT* set in a
more-or-less chronological order, and Segment Exclusion will be pretty
useless for these attributes.
Short summary:
With the appropriate clustering, ISTM Segment Exclusion
can be useful on all columns
Chris Browne wrote:
_On The Other Hand_, there will be attributes that are *NOT* set in a
more-or-less chronological order, and Segment Exclusion will be pretty
useless for these attributes.
Really? I was hoping that it'd be useful for any data
with long runs of the same value repeated -
Andrew Sullivan wrote:
On Mon, Jan 07, 2008 at 07:16:35PM +0100, Markus Schiltknecht wrote:
...the requirements: no single tuple in the segment may be
significantly out of the average bounds. Otherwise, the min/max gets
pretty useless and the segment can never be excluded.
The segment can
Gregory Stark wrote:
Note that speeding up a query from 20s to 5s isn't terribly useful.
I disagree totally with that.
That is the difference between no chance of someone waiting for a web
page to load; vs. a good chance they'd wait. And 2s vs 0.5s is the
difference between a web site that
Mark Mielke wrote:
I am curious - what algorithms exist to efficiently do a parallel sort?
Do you mean if sorting 1 million items, it is possible to separate this
into 2 sets of 500 thousand each, execute them in separate threads
(with task administration and synchronization overhead) ,
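The split-sort-merge scheme described above can be sketched in a few lines of Python; the Pool setup is the "task administration" overhead, and the final merge is single-threaded (a sketch, not how a DBMS would do it):

```python
import heapq
import random
from multiprocessing import Pool

def parallel_sort(items, nworkers=2):
    """Sort the chunks in separate worker processes, then do a
    streaming k-way merge of the sorted runs."""
    size = max(1, (len(items) + nworkers - 1) // nworkers)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with Pool(nworkers) as pool:          # task-administration overhead
        runs = pool.map(sorted, chunks)   # each worker sorts one run
    return list(heapq.merge(*runs))       # single-threaded merge pass

if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(100_000)]
    assert parallel_sort(data) == sorted(data)
```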
Tom Lane wrote:
...I can believe that suitable test cases would show
2X improvement for 2 threads,
One other thing I found interesting is that their test case
showed a near 2X improvement for hyperthreading; where I haven't
heard of many other ways to get hyperthreading to show improvements
Has anyone looked into sorting algorithms that could use
more than one CPU or core at a time?
Benchmarks I see[1][2] suggest that sorting is an area that
improves greatly with multiple processors and even with
multi-threading on a single core processor.
For 1-processor and 2-threads (1p2t),
--
Ron Johnson, Jr.
Jefferson LA USA
%SYSTEM-F-FISH, my hovercraft is full of eels
Robert Treat wrote:
On Tuesday 27 November 2007 15:07, Simon Riggs wrote:
On Tue, 2007-11-27 at 14:02 -0500, Tom Lane wrote:
There has been some discussion of making a project policy of dropping
support for old releases after five years. Should we consider formally
instituting that?
...
Heikki Linnakangas wrote:
Luke Lonergan wrote:
Vacuum is a better thing to run, much less CPU usage.
Vacuum is actually not good for this purpose, because it's been
special-cased to not bump the usage count.
Though the OS's page cache will still see it as accesses, no?
Joshua D. Drake wrote:
We develop and commit like normal *until* the community feels there is
enough for release. Then we announce a feature freeze.
I think you just described what will happen in reality regardless
of whatever is decided to be an official plan. :) I don't
think that's
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
With respect to you Kevin, your managers should wait. You don't
install .0 releases of any software into production without months
of testing. At which point, normally a .1 release has come out anyway.
How exactly do you expect the
Brendan Jurd wrote:
Seems it would be best to apply this
nomenclature consistently, and simply drop the name postmaster from
use.
+1. I agree the postmaster references in the docs, etc. should
go away - with perhaps the exception of one FAQ entry that says
postmaster is a deprecated name in
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Why does text search need a tsquery data type? I realize it needs
tsvector so it can create indexes and updated trigger columns, but it
seems tsquery could instead just be a simple text string.
By that logic, we don't need any data
Tom Lane wrote:
Part of the reason for being conservative about changing here
is that we've got a mix of standard and nonstandard behaviors
A lot of this is legacy behavior that would never have passed muster
if it had been newly proposed in the last few years --- we have gotten
*far*
Merlin Moncure wrote:
On 8/21/07, Magnus Hagander [EMAIL PROTECTED] wrote:
OTOH, if we do it as a compat package, we need to set a firm end-date on
it, so we don't have to maintain it forever.
I would suggest making a pgfoundry project...that's what was done with
userlocks. I'm pretty
Joshua D. Drake wrote:
Tom Lane wrote:
Robert Treat [EMAIL PROTECTED] writes:
What exactly does default_text_search_config buy us? I think it is
supposed
to simplify things, but it sounds like it adds a bunch of corner cases,
Well, the main thing we'd lose if we remove it is all trace of
Magnus Hagander wrote:
I don't use the functional index part, but for new users I can see how
that's certainly a *lot* easier.
Can someone with modern hardware test to see if it's
still quite a bit slower than the extra column? I had
tried it two years ago, and found the functional index
to
Bruce Momjian wrote:
Oleg Bartunov wrote:
What is the basis of your assumption? In my opinion, it's very limited
use of text search, because it doesn't support ranking. In 4-5 years
of tsearch2 usage I never used it, and I never saw it in the mailing lists.
This is very user-oriented feature and we
Bruce Momjian wrote:
Ron Mayer wrote:
Bruce Momjian wrote:
Oleg Bartunov wrote:
What is the basis of your assumption?
I think I asked about this kind of usage a couple years back;...
http://archives.postgresql.org/pgsql-general/2005-10/msg00475.php
http://archives.postgresql.org/pgsql
Bruce Momjian wrote:
Ron Mayer wrote:
I wish I knew this myself. :-) Whatever I had done happened to work
but that was largely through people on IRC walking me through it.
This illustrates the major issue --- that this has to be simple for
people to get started, while keeping
Bruce Momjian wrote:
In talking to people who are assigned to review patches or could review
patches, I often get the reply, "Oh, yea, I need to do that."
Would it inspire more people to learn enough to become patch
reviewers if patch authors scheduled walkthroughs of their
patches with question
Andrew Dunstan wrote:
Josh Berkus wrote:
I think that may be where we're heading. In that case, we may need to
talk about branching earlier so that developers can work on the new
version who are frozen out of the in-process one.
I've argued this in the past. But be aware that it will make
Josh Berkus wrote:
And then what? dynamically construct all your SQL queries?
Sure, sounds like a simple solution to me...
Not to mention DB security issues. How do you secure your database when
your web client has DDL access?
So, Edward, the really *interesting* idea would be to come
Bruce Momjian wrote:
My typical cycle is to take the patch, apply it to my tree, then cvs
diff and look at the diff, adjust the source, and rerun until I like the
diff and apply. How do I do that with this setup?
The most similar to what you're doing would be to
merge the patch's branch
Alvaro Herrera wrote:
Yes, it's nice. Consider this: Andrew develops some changes to PL/perl
in his branch. Neil doesn't like something in those changes, so he
commits a fix there.
If I understand right, another advantage is that the SCM will keep
track of which of those changes came from
Alvaro Herrera wrote:
Once autovacuum_naptime... autovacuum_max_workers...
How does this sound?
The knobs exposed on autovacuum feel kinda tangential to
what I think I'd really want to control.
IMHO vacuum_mbytes_per_second would be quite a bit more
intuitive than cost_delay, naptime, etc.
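The relationship between the existing cost knobs and an MB/s figure can be sketched as back-of-envelope arithmetic (function name hypothetical; assumes vacuum_cost_limit=200, a 20 ms delay, a page-miss cost of 10, and that every page is a cache miss):

```python
def vacuum_mb_per_sec(cost_limit=200, cost_delay_ms=20, page_cost=10,
                      block_size=8192):
    """Rough throughput ceiling implied by the cost-based knobs:
    vacuum spends cost_limit credits, then sleeps for cost_delay_ms;
    assume each page read costs page_cost credits (all misses)."""
    pages_per_round = cost_limit / page_cost
    rounds_per_sec = 1000 / cost_delay_ms
    return pages_per_round * rounds_per_sec * block_size / (1024 * 1024)

print(round(vacuum_mb_per_sec(), 2))   # 7.81 with the assumed defaults
```

So the assumed defaults work out to 20 pages per round, 50 rounds per second, i.e. about 7.8 MB/s of vacuum reads.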
Hannu Krosing wrote:
...is storing all tuple visibility info in a separate file.
At first it seems to just add overhead, but for lots (most?) use cases
the separately stored visibility should be highly compressible, so for
example for bulk-loaded tables you could end up with one bit per
Andrew Dunstan wrote:
Joshua D. Drake wrote:
Where on the website can I see what plugins are included with
PostgreSQL?
YES! That's IMHO a more fundamental problem. The specific
question about Text Search seems more like a symptom. While
I don't mind Text Search in core, it seems an even
Gregory Stark wrote:
Actually no. A while back I did experiments to see how fast reading a file
sequentially was compared to reading the same file sequentially but skipping
x% of the blocks randomly. The results were surprising (to me) and depressing.
The breakeven point was about 7%. [...]
Matthew T. O'Connor wrote:
Alvaro Herrera wrote:
I'd like to hear other people's opinions on Darcy Buskermolen proposal
to have a log table, on which we'd register what did we run, at what
time, how long did it last, [...]
I think most people would just be happy if we could get autovacuum
Tom Lane wrote:
BTW, I'm thinking that a cost constant probably ought to be measured
in units of cpu_operator_cost. The default for built-in functions would
thus be 1, at least till such time as someone wants to refine the
estimates. We'd probably want the default for PL and SQL functions
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
What value is allowing multiple queries via PQexec()
The only argument I can think of is that it allows applications to be
sloppy about parsing a SQL script into individual commands before they
send it. (I think initdb may be guilty of
Gregory Stark wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
I have a new idea. ...the BSD kernel...similar issue...to smooth writes:
Linux has a more complex solution to this (of course) which has undergone a
few generations over time. Older kernels had a user space daemon called
bdflush
community to make thoughtful suggestions to the glibc community?
Ron Peacetree
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
ITAGAKI Takahiro wrote:
Kevin Grittner [EMAIL PROTECTED] wrote:
...the file system cache will collapse repeated writes to the
same location until things settle ...
If we just push dirty pages out to the OS as soon as possible,
and let the file system do its job, I think we're in better
Jim C. Nasby wrote:
On usage, ISTM it would be better to turn on GIT only for a clustered
index and not the PK? I'm guessing your automatic case is intended for
SERIAL PKs, but maybe it would be better to just make that explicit.
Not necessarily; since often (in my tables at least) the data
by hosting companies (who seem to have the biggest problem
with contrib) and we wouldn't need as many discussions of which contribs
to move into core.
Ron M
Zeugswetter Andreas ADI SD wrote:
POSIX_FADV_WILLNEED definitely sounds very interesting, but:
I think this interface was intended to hint larger areas (megabytes).
But the wishful thinking was not to hint seq scans, but to advise
single 8k pages.
Surely POSIX_FADV_SEQUENTIAL is the one
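As an illustration of the hinting interface under discussion (helper name hypothetical; `os.posix_fadvise` is available in Python 3.3+ on POSIX systems only):

```python
import os
import tempfile

def willneed(path, offset=0, length=0):
    """Ask the kernel to start readahead on [offset, offset+length);
    length 0 means 'to end of file'. Absent on non-POSIX platforms."""
    if not hasattr(os, "posix_fadvise"):
        return False
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)
        return True
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * 65536)
    f.flush()
    print(willneed(f.name))   # True on Linux
```

POSIX_FADV_SEQUENTIAL and POSIX_FADV_WILLNEED take the same arguments, so hinting a whole sequential scan versus a single 8k page is just a matter of the offset/length passed.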
Mark Woodward wrote:
Exactly. IMHO, it is a frustrating environment. PostgreSQL is a great
system, and while I completely respect the individuals involved, I think
the management, for lack of a better term, is difficult.
'course you're welcome to fork the project as well if your style
and/or
Andrew Sullivan wrote:
Just because I'm one of those statistics true believers, what sort of
information do you think it is possible for the DBA to take into
consideration, when building a hint, that could not in principle be
gathered efficiently by a statistics system? It seems to me that
Andrew - Supernews wrote:
Whether the underlying device lies about the write completion is another
matter. All current SCSI disks have WCE enabled by default, which means
that they will lie about write completion if FUA was not set in the
request, which FreeBSD never sets. (It's not possible
Simon Riggs wrote:
On Mon, 2006-09-11 at 06:20 -0700, Say42 wrote:
That's what I want to do:
1. Replace not very useful indexCorrelation with indexClustering.
An opinion such as "not very useful" isn't considered sufficient
explanation or justification for a change around here.
Not
Gregory Stark wrote:
Ron Mayer [EMAIL PROTECTED] writes:
...vastly overestimate the number of pages .. because postgresql's guess
at the correlation being practically 0 despite the fact that the distinct
values for any given column are closely packed on a few pages.
I think we need
Tom Lane wrote:
But a quick troll through the CVS logs shows ...
multi-row VALUES, not only for INSERT but everywhere SELECT is allowed ...
multi-argument aggregates, including SQL2003-standard statistical aggregates ...
standard_conforming_strings can be turned on (HUGE deal for some people)
release they're more likely to wait.
Ron
[1] http://archives.postgresql.org/pgsql-patches/2005-02/msg00171.php
[EMAIL PROTECTED] wrote:
Ron Mayer wrote:
We have not had that many cases where lack of
communication was a problem.
One could say too much communication was the problem this time.
I get the impression people implied they'd do something on a TODO
and didn't. Arguably the project had been
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Would people be interested in a trivial patch that adds O_NOATIME
to open() for platforms that support it? (apparently Linux 2.6.8
and better).
Isn't that usually, and more portably, handled in the filesystem
mount options?
Yes to both
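The per-file alternative to a mount option looks roughly like this (sketch; O_NOATIME is Linux 2.6.8+ and fails with EPERM unless the caller owns the file, hence the fallback):

```python
import os
import tempfile

def open_noatime(path):
    """Read-only open that skips the atime update where supported.
    Falls back to a plain open if the flag is missing or EPERM."""
    flags = os.O_RDONLY | getattr(os, "O_NOATIME", 0)
    try:
        return os.open(path, flags)
    except PermissionError:
        return os.open(path, os.O_RDONLY)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    name = f.name
fd = open_noatime(name)
assert os.read(fd, 5) == b"hello"
os.close(fd)
os.unlink(name)
```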
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Tom Lane wrote:
Isn't that usually, and more portably, handled in the filesystem
mount options?
Yes to both. I could imagine that for small systems/workstations
you might have some files that want access time, and others that
wanted
Tom Lane wrote:
Both of these pages say up front that they are considering read-only
data.
Can I assume read-mostly partitions could use the read-I/O
efficient indexes, while update-intensive partitions of the same
table could use b-tree indexes?
All of my larger (90GB+) tables can have
Peter Eisentraut wrote:
Jim Nasby wrote:
The truth is, virtually no one, even highly technical people, ever
picks nits between kB vs KiB vs KB.
The question isn't so much whether to allow KiB and such -- that would
obviously be trivial. The question is whether we want to have kB mean
1000
Tom Lane wrote:
[EMAIL PROTECTED] writes:
Reading 1/4, for a larger table, has a good chance of being faster than
reading 4/4 of the table. :-)
Really?
If you have to hit one tuple out of four, it's pretty much guaranteed
that you will need to fetch every heap page.
I think it's not
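Tom's "every heap page" claim follows from simple probability: with r rows per page and uniformly scattered matches at selectivity s, a page escapes being fetched only with probability (1-s)^r (the ~100 rows per 8 kB page figure below is an assumed example, not from the thread):

```python
def fraction_of_pages_fetched(rows_per_page, selectivity):
    """Chance a heap page holds at least one selected row, assuming
    the selected rows are scattered uniformly (zero correlation)."""
    return 1 - (1 - selectivity) ** rows_per_page

# ~100 rows per 8 kB page, selecting 1 row in 4:
print(fraction_of_pages_fetched(100, 0.25) > 0.999)   # True
```

With 0.75^100 on the order of 1e-13, essentially every page gets touched, which is why reading 1/4 of the tuples is not 1/4 of the I/O.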
Peter Eisentraut wrote:
I think it would be useful to allow units to be added to these settings, for
example...
shared_buffers = 512MB
which is a bit cumbersome to calculate right now (you'd need shared_buffers = 65536).
I haven't thought yet how to parse or implement this, but would people find
this
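Parsing such a setting is straightforward; a sketch (function name hypothetical, and assuming the default 8 kB block size) that converts '512MB'-style values back to a page count:

```python
import re

BLOCK_SIZE = 8192  # assumes the default 8 kB block size
UNITS = {"kB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_buffer_setting(value):
    """Convert '512MB'-style values to a page count; bare numbers
    are already pages, matching the old behaviour."""
    m = re.fullmatch(r"(\d+)\s*([kMG]B)?", value.strip())
    if not m:
        raise ValueError(f"bad setting: {value!r}")
    n, unit = int(m.group(1)), m.group(2)
    return n if unit is None else n * UNITS[unit] // BLOCK_SIZE

print(parse_buffer_setting("512MB"))  # 65536
```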
Peter Eisentraut wrote:
Hannu Krosing wrote:
So we would have
src/pl/pljava/README.TXT
and anybody looking for PLs would find the info in a logical place
Right. When was the last time any user looked under src/pl in the first
place? Or even under src? If you're looking for pljava, it's
Tom Lane wrote:
The difference is that I will have reasonable confidence that
the README.TXT under src/pl will give instructions that match
the version of PostgreSQL that I have. I assume that README
will call out the version of PL/R or PL/Ruby that I want that
was tested with the release of
results.
(I note that this was not a problem for Tom since the
timings of his first and second runs were the same, so
I assume he was just saying that he observed that the
query was cached rather than that the first run forced
the second run to be cached.)
Ron
Tom Lane wrote:
One objection to this is that after moving off the gold standard of
1.0 = one page fetch, there is no longer any clear meaning to the
cost estimate units; you're faced with the fact that they're just an
arbitrary scale. I'm not sure that's such a bad thing, though.
It seems
as the reasons why compressed filesystems aren't
very popular.
Has anyone tried running postgresql on a compressing file-system?
I'd expect the penalties to outweigh the benefits (or they'd be
more common); but if it gives impressive results, it might add
weight to this feature idea.
Ron M
I think
Jim C. Nasby wrote:
... how many pages per bit ...
Are we trying to set up a complex solution to a problem
that'll be mostly moot once partitioning is easier and
partitioned tables are common?
In many cases I can think of the bulk of the data would be in
old partitions that are practically
Where * ==
{print | save to PDF | save to mumble format | display on screen}
Anyone know of one?
TiA
Ron
of pivot essentially guarantees O(NlgN)
behavior no matter what the distribution of the data at the price of
increasing the cost of each pass by a constant factor (the generation
of a random number or numbers).
In sum, QuickSort gets all sorts of bad press that is far more FUD
than fact ITRW.
Ron
At 04:24 AM 2/17/2006, Ragnar wrote:
On fös, 2006-02-17 at 01:20 -0500, Ron wrote:
OK, so here's _a_ way (there are others) to obtain a mapping such that
if a < b then f(a) < f(b) and
if a == b then f(a) == f(b)
By scanning the table once, we can map say 001h (Hex used to ease
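Ron's mapping with exactly those two properties can be sketched as a rank table (function name hypothetical): one scan gathers the values, then sorting the distinct set and numbering it yields the codes:

```python
def rank_map(values):
    """Sort the distinct values and number them: a < b gives
    f(a) < f(b), and a == b gives f(a) == f(b)."""
    return {v: code for code, v in enumerate(sorted(set(values)))}

rows = ["pear", "apple", "pear", "fig"]
f = rank_map(rows)
assert all((f[a] < f[b]) == (a < b) for a in rows for b in rows)
```

The catch, of course, is that building the table already requires sorting the distinct values, which is the crux of the objections later in the thread.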
At 05:19 AM 2/17/2006, Markus Schaber wrote:
Hi, Ron,
Ron schrieb:
OK, so here's _a_ way (there are others) to obtain a mapping such that
if a < b then f(a) < f(b) and
if a == b then f(a) == f(b)
Pretend each row is an integer of row size (so a 2KB row becomes a 16Kb
integer; a 4KB row
At 10:53 AM 2/17/2006, Martijn van Oosterhout wrote:
On Fri, Feb 17, 2006 at 08:23:40AM -0500, Ron wrote:
For this mapping, you need a full table sort.
One physical IO pass should be all that's needed. However, let's
pretend you are correct and that we do need to sort the table to get
I assume we have such?
Ron
At 06:35 AM 2/16/2006, Steinar H. Gunderson wrote:
On Wed, Feb 15, 2006 at 11:30:54PM -0500, Ron wrote:
Even better (and more easily scaled as the number of GPR's in the CPU
changes) is to use
the set {L; L+1; L+2; t1; R-2; R-1; R}
This means that instead of 7 random memory accesses, we have
tuning methods that attempt to
directly address both issues.
Ron
At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:
On Thu, Feb 16, 2006 at 08:22:55AM -0500, Ron wrote:
3= Especially in modern systems where the gap between internal CPU
bandwidth and memory bandwidth is so great, the overhead of memory
accesses for comparisons and moves is the majority
At 10:52 AM 2/16/2006, Ron wrote:
At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:
Where this does become interesting is where we can convert a datum to
an integer such that if f(A) < f(B) then A < B. Then we can sort on
f(X) first with just integer comparisons and then do a full tuple
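One such datum-to-integer conversion is a fixed-width big-endian prefix (a sketch; names are illustrative): a < b implies key(a) <= key(b), so cheap integer comparisons do most of the work and only equal keys need the full comparison:

```python
def prefix_key(s, width=8):
    """First `width` bytes as a big-endian integer. a < b implies
    key(a) <= key(b); equal keys fall back to comparing the
    full values."""
    return int.from_bytes(s[:width].ljust(width, b"\0"), "big")

words = [b"pear", b"apple", b"apricot", b"applesauce", b"banana"]
assert sorted(words) == sorted(words, key=lambda w: (prefix_key(w), w))
```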
At 12:19 PM 2/16/2006, Scott Lamb wrote:
On Feb 16, 2006, at 8:32 AM, Ron wrote:
Let's pretend that we have the typical DB table where rows are
~2-4KB apiece. 1TB of storage will let us have 256M-512M rows in
such a table.
A 32b hash code can be assigned to each row value such that only
At 01:47 PM 2/16/2006, Ron wrote:
At 12:19 PM 2/16/2006, Scott Lamb wrote:
On Feb 16, 2006, at 8:32 AM, Ron wrote:
Let's pretend that we have the typical DB table where rows are
~2-4KB apiece. 1TB of storage will let us have 256M-512M rows in
such a table.
A 32b hash code can be assigned
This behavior is consistent with the pivot choosing algorithm
assuming certain distribution(s) for the data. For instance,
median-of-three partitioning is known to be pessimal when the data is
geometrically or hyper-geometrically distributed. Also, care must be
taken that sometimes is not
At 08:21 PM 2/15/2006, Tom Lane wrote:
Ron [EMAIL PROTECTED] writes:
How are we choosing our pivots?
See qsort.c: it looks like median of nine equally spaced inputs (ie,
the 1/8th points of the initial input array, plus the end points),
implemented as two rounds of median-of-three choices
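The two-round median-of-three scheme Tom describes can be sketched like this (a Python illustration of the idea, not a transcription of qsort.c):

```python
def med3(a, i, j, k):
    """Index of the median of a[i], a[j], a[k]."""
    x, y, z = a[i], a[j], a[k]
    if x < y:
        return j if y < z else (k if x < z else i)
    return i if x < z else (k if y < z else j)

def median_of_nine(a):
    """Median-of-three of three median-of-three samples taken at the
    1/8th points of the array (including the end points)."""
    n = len(a)
    lo, mid, hi = 0, n // 2, n - 1
    d = n // 8
    i = med3(a, lo, lo + d, lo + 2 * d)
    j = med3(a, mid - d, mid, mid + d)
    k = med3(a, hi - 2 * d, hi - d, hi)
    return a[med3(a, i, j, k)]

print(median_of_nine(list(range(100))))  # 50
```

Sampling nine spread-out elements makes it much harder for any simple input distribution to consistently feed quicksort a pessimal pivot.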
Subject line says it all. I'm going to be testing changes under both
Linux and WinXP, so I'm hoping those of you that do M$ hacking will
pass along your list of suggestions and/or favorite (and hated so I
know what to avoid) tools.
TiA,
Ron
that as a starting point.
Ron
[1] http://archives.postgresql.org/pgsql-patches/2003-09/msg00103.php
[1b] http://archives.postgresql.org/pgsql-patches/2003-09/msg00286.php
[2] http://archives.postgresql.org/pgsql-patches/2003-09/msg00122.php
[3] http://archives.postgresql.org/pgsql-patches/2003
://archives.postgresql.org/pgsql-patches/2003-12/msg00030.php
on Peter Eisentraut's recommendation to implement SQL standard intervals first.
Ron Mayer wrote:
Larry Rosenman wrote:
Michael Glaesemann wrote:
On Jan 8, 2006, at 12:12, Larry Rosenman wrote:
I was thinking of handling the TODO
Joe Conway wrote:
Last time I thought about this problem, that's what I concluded. I don't
think there is a reasonable and backward compatible solution.
I also think the best non-compatible solution is to require non-numeric
elements to be delimited (double quotes, configurable?), and use
the performance drainage is?
We have to fix this.
Ron
-Original Message-
From: Luke Lonergan [EMAIL PROTECTED]
Sent: Oct 5, 2005 11:24 AM
To: Michael Stone [EMAIL PROTECTED], Martijn van Oosterhout
kleptog@svana.org
Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
is the code, but the code in isolation is often the Slow Path to
understanding with systems as complex as a DBMS IO layer.
Ron
-Original Message-
From: Joshua D. Drake [EMAIL PROTECTED]
Sent: Oct 5, 2005 1:18 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
The source
part seems to have
been a useful and reasonable engineering discussion that has
exposed a number of important things.
Regards,
Ron
4, 2005 8:24 AM
To: Simon Riggs [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED], Ron Peacetree [EMAIL PROTECTED],
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Tue, Oct 04, 2005 at 12:24:54PM +0100, Simon Riggs wrote:
How did you determine the 1500
Jeff, are those _burst_ rates from HD buffer or _sustained_ rates from
actual HD media? Rates from IO subsystem buffer or cache are
usually considerably higher than Average Sustained Transfer Rate.
Also, are you measuring _raw_ HD IO (bits straight off the platters, no
FS or other overhead) or
than that, do we
see any performance increase?
If a modest CPU can drive a DB IO rate of 25MBps, but that rate
does not go up regardless of how much extra CPU we throw at
it...
Ron
-Original Message-
From: Josh Berkus josh@agliodbs.com
Sent: Oct 3, 2005 6:03 PM
To: Jeffrey W. Baker [EMAIL
OK, change "performance" to "single thread performance" and we
still have a valid starting point for a discussion.
Ron
-Original Message-
From: Gregory Maxwell [EMAIL PROTECTED]
Sent: Oct 3, 2005 8:19 PM
To: Ron Peacetree [EMAIL PROTECTED]
Subject: Re: [HACKERS] [PERFORM] A Better External
do, _nothing_ is going to help us as much)
Ron
-Original Message-
From: Tom Lane [EMAIL PROTECTED]
Sent: Oct 1, 2005 2:01 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeffrey W. Baker [EMAIL PROTECTED] writes:
I think the largest speedup will be to dump the multiphase