On Thu, Sep 29, 2005 at 03:28:27PM +0200, Zeugswetter Andreas DAZ SD wrote:
In my original example, a sequential scan of the 1TB of 2KB
or 4KB records, = 250M or 500M records of data, being sorted
on a binary value key will take ~1000x more time than reading
in the ~1GB Btree I
On Wed, 2005-10-05 at 13:21 -0400, Ron Peacetree wrote:
First I wanted to verify that pg's IO rates were inferior to The Competition.
Now there's at least an indication that someone else has solved similar
problems. Existence proofs make some things easier ;-)
Is there any detailed programmer
On Wed, 2005-10-05 at 19:54 -0400, Ron Peacetree wrote:
I made the from-left-field suggestion that perhaps a pg-native fs
format would be worth consideration. This is a major project, so
the suggestion was to at least some extent tongue-in-cheek.
This idea is discussed about once a year on
Now I've asked for the quickest path to detailed
understanding of the pg IO subsystem. The goal being to get
more up to speed on its coding details. Certainly not to
annoy you or anyone else.
Basically pg does random 8k (compile-time blocksize) reads/writes only.
Bitmap and sequential
On Wed, Oct 05, 2005 at 04:55:51PM -0700, Luke Lonergan wrote:
You've proven my point completely. This process is bottlenecked in the CPU.
The only way to improve it would be to optimize the system (libc) functions
like fread, where it is spending most of its time.
Or to optimize its IO
Andreas,
pg relies on the OS readahead (== larger block IO) to do efficient IO.
Basically the pg scan performance should match a dd if=file of=/dev/null
bs=8k, unless CPU bound.
FWIW, we could improve performance by creating larger write blocks when
appropriate, particularly on Unixes like
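[Editor's note: the dd comparison above can be mimicked with a plain 8k-block read loop. This is a hedged sketch of the access pattern being described, not PostgreSQL's actual storage-manager code, which goes through its buffer manager; the function name is illustrative.]

```c
/* Sequential-scan access pattern: fixed-size 8k block reads, relying on
 * the OS readahead to turn them into larger physical IO. Sketch only --
 * PostgreSQL's smgr/md.c layer is considerably more involved. */
#include <stdio.h>

#define BLCKSZ 8192  /* pg's compile-time block size */

long scan_file(const char *path)
{
    char buf[BLCKSZ];
    long blocks = 0;
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    while (fread(buf, 1, BLCKSZ, f) == BLCKSZ)
        blocks++;   /* a real scan would examine each tuple here */
    fclose(f);
    return blocks;
}
```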
On Wed, Oct 05, 2005 at 07:54:15PM -0400, Ron Peacetree wrote:
I asked some questions about physical layout and format translation
overhead being possibly suboptimal that seemed to be agreed to, but
specifics as to where we are taking the hit don't seem to have been
made explicit yet.
This
Andreas,
On 10/6/05 3:56 AM, Zeugswetter Andreas DAZ SD [EMAIL PROTECTED] wrote:
pg relies on the OS readahead (== larger block IO) to do efficient IO.
Basically the pg scan performance should match a dd if=file of=/dev/null
bs=8k, unless CPU bound.
Which it is. Postgres will currently do a
Martijn van Oosterhout kleptog@svana.org writes:
Indeed, one of the things on my list is to remove all the lseeks in
favour of pread. Halving the number of kernel calls has got to be worth
something, right? Portability is an issue of course...
Being sure that it's not a pessimization is another
Martijn van Oosterhout kleptog@svana.org writes:
Are we awfully worried about people still using 2.0 kernels? And it
would replace two calls with three in the worst case, we currently
lseek before every read.
That's utterly false.
regards, tom lane
On Thu, Oct 06, 2005 at 03:57:38PM -0400, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
Indeed, one of the things on my list is to remove all the lseeks in
favour of pread. Halving the number of kernel calls has got to be worth
something, right? Portability is an issue
On Thu, Oct 06, 2005 at 04:25:11PM -0400, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
Are we awfully worried about people still using 2.0 kernels? And it
would replace two calls with three in the worst case, we currently
lseek before every read.
That's utterly false.
On Mon, Oct 03, 2005 at 01:34:01PM -0700, Josh Berkus wrote:
Realistically, you can't do better than about 25MB/s on a
single-threaded I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
On Wed, Oct 05, 2005 at 05:41:25AM -0400, Michael Stone wrote:
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
COPY TO /dev/null WITH binary
13MB/s, 55% user 45% system (ergo, CPU bound)
[snip]
the most expensive. But it does point out that the whole process is
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
COPY TO /dev/null WITH binary
13MB/s, 55% user 45% system (ergo, CPU bound)
[snip]
the most expensive. But it does point out that the whole process is
probably CPU bound more than anything else.
Note that 45% of that
On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:
Just FYI, I ran a count(*) on a 15.6GB table on a lightly loaded db and
it ran in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
RAID10, reiserfs). A little less than 100MB/sec.
And none of that 15G table is in the
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
COPY TO /dev/null WITH binary
13MB/s, 55% user 45% system (ergo, CPU bound)
[snip]
the most expensive
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Nope - it would be disk wait.
COPY is CPU bound on I/O subsystems faster than 50 MB/s on COPY (in) and about
15 MB/s (out).
- Luke
-Original Message-
From: Michael Stone [mailto:[EMAIL PROTECTED]
Sent: Wed Oct 05 09:58:41 2005
We have to fix this.
Ron
The source is freely available for your perusal. Please feel free to
point us in specific directions in the code where you may see some
benefit. I am positive all of us that can, would put resources into
fixing the issue had we a specific direction to attack.
is the code, but the code in isolation is often the Slow Path to
understanding with systems as complex as a DBMS IO layer.
Ron
-Original Message-
From: Joshua D. Drake [EMAIL PROTECTED]
Sent: Oct 5, 2005 1:18 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
The source
On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:
Nope - it would be disk wait.
I said I/O overhead; i.e., it could be the overhead of calling the
kernel for I/O's. E.g., the following process is having I/O problems:
time dd if=/dev/sdc of=/dev/null bs=1 count=1000
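[Editor's note: Michael's dd bs=1 example can be reproduced in C. The process below is "having I/O problems" purely from kernel-call overhead: every byte costs a full read() syscall, so it is slow even when the file is fully cached. A hedged illustration of the point, not a claim about where Postgres specifically spends its time.]

```c
/* Same data, two syscall budgets: one read() per byte versus one
 * read() per 8k block. The byte-at-a-time variant is bottlenecked on
 * kernel entry, not on the disk. */
#include <unistd.h>
#include <fcntl.h>

long copy_bytewise(const char *path)    /* ~N syscalls for N bytes */
{
    char c;
    long n = 0;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    while (read(fd, &c, 1) == 1)
        n++;
    close(fd);
    return n;
}

long copy_blockwise(const char *path)   /* ~N/8192 syscalls */
{
    char buf[8192];
    long n = 0;
    ssize_t got;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    while ((got = read(fd, buf, sizeof buf)) > 0)
        n += got;
    close(fd);
    return n;
}
```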
On Wed, 2005-10-05 at 12:14 -0400, Ron Peacetree wrote:
I've now gotten verification from multiple working DBA's that DB2, Oracle, and
SQL Server can achieve ~250MBps ASTR (with as much as ~500MBps ASTR in
setups akin to Oracle RAC) when attached to a decent (not outrageous, but
decent) HD
On 10/6/05, Michael Stone [EMAIL PROTECTED] wrote:
On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:
Nope - it would be disk wait.
I said I/O overhead; i.e., it could be the overhead of calling the
kernel for I/O's. E.g., the following process is having I/O problems:
time dd
, but the code in isolation is often the Slow Path to
understanding with systems as complex as a DBMS IO layer.
Ron
-Original Message-
From: Joshua D. Drake [EMAIL PROTECTED]
Sent: Oct 5, 2005 1:18 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
The source is freely
I'm putting in as much time as I can afford thinking about pg related
performance issues. I'm doing it because of a sincere desire to help
understand and solve them, not to annoy people.
If I didn't believe in pg, I wouldn't be posting thoughts about how to
make it better.
It's probably worth
Michael,
On 10/5/05 8:33 AM, Michael Stone [EMAIL PROTECTED] wrote:
real    0m8.889s
user    0m0.877s
sys     0m8.010s
it's not in disk wait state (in fact the whole read was cached) but it's
only getting 1MB/s.
You've proven my point completely. This process is bottlenecked in the
On Mon, Oct 03, 2005 at 10:51:32PM +0100, Simon Riggs wrote:
Basically, I recommend adding -Winline -finline-limit-1500 to the
default build while we discuss other options.
I add -Winline but get no warnings. Why would I use -finline-limit-1500?
I'm interested, but uncertain as to what
On Tue, 2005-10-04 at 12:04 +0200, Martijn van Oosterhout wrote:
On Mon, Oct 03, 2005 at 10:51:32PM +0100, Simon Riggs wrote:
Basically, I recommend adding -Winline -finline-limit-1500 to the
default build while we discuss other options.
I add -Winline but get no warnings. Why would I
On Tue, Oct 04, 2005 at 12:24:54PM +0100, Simon Riggs wrote:
How did you determine the 1500 figure? Can you give some more info to
surround that recommendation to allow everybody to evaluate it?
[EMAIL PROTECTED]:~/dl/cvs/pgsql-local/src/backend/utils/sort$ gcc
-finline-limit-1000 -Winline -O2
Sent: Oct 4, 2005 8:24 AM
To: Simon Riggs [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED], Ron Peacetree [EMAIL PROTECTED],
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Tue, Oct 04, 2005 at 12:24:54PM +0100, Simon Riggs wrote:
How did you determine the 1500
Martijn van Oosterhout kleptog@svana.org writes:
A quick binary search puts the cutoff between 1200 and 1300. Given
version variation I picked a nice round number, 1500.
Ugh, that's for -O2, for -O3 and above it needs to be 4100 to work.
Maybe we should go for 5000 or so.
I'm using: gcc
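[Editor's note: the function at issue, inlineApplySortFunction(), is large enough that gcc's default inline budget refuses it; -Winline makes the refusal visible and -finline-limit raises the budget (the thread's -finline-limit-1500 spelling and the -finline-limit=1500 form were both accepted by gcc of that era). A toy stand-in, with a hypothetical function name but the real flags, shows the mechanism.]

```c
/* Compile with: gcc -O2 -Winline [-finline-limit-1500] ...
 * If a body marked inline exceeds gcc's inline budget, -Winline warns
 * ("inlining failed in call to ..."); raising -finline-limit silences
 * the warning by actually inlining. cmp_int is a toy stand-in for
 * tuplesort.c's inlineApplySortFunction(). */
#include <stdlib.h>

static inline int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* branch-free three-way compare */
}

void sort_ints(int *v, size_t n)
{
    /* qsort takes cmp_int's address, so an out-of-line copy is kept
     * regardless; the inlining question matters for direct calls in
     * the hot comparison path. */
    qsort(v, n, sizeof *v, cmp_int);
}
```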
On Tue, 2005-10-04 at 16:30 +0200, Martijn van Oosterhout wrote:
On Tue, Oct 04, 2005 at 10:06:24AM -0400, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
I don't know what the units of this number are, but it's apparently
Martijn van Oosterhout kleptog@svana.org writes:
1. Add -Winline so we can at least be aware of when it's (not) happening.
Yeah, I agree with that part, just not with adding a fixed -finline-limit
value.
While on the subject of gcc warnings ... if I touch that code, I want to
remove
On Tue, Oct 04, 2005 at 10:06:24AM -0400, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
I don't know what the units of this number are, but it's apparently far
too gcc-version-dependent to consider putting into our build
On Tue, Oct 04, 2005 at 03:56:53PM +0100, Simon Riggs wrote:
I've been using gcc 3.4 and saw no warning when using either -Winline
or -O3 -Winline.
Ok, I've just installed 3.4 and verified that. I examined the asm code
and gcc is inlining it. I concede, at this point just throw in -Winline
and
On Tue, Oct 04, 2005 at 05:23:41PM +0200, Martijn van Oosterhout wrote:
On Tue, Oct 04, 2005 at 03:56:53PM +0100, Simon Riggs wrote:
I've been using gcc 3.4 and saw no warning when using either -Winline
or -O3 -Winline.
Ok, I've just installed 3.4 and verified that. I examined the asm code
Michael,
Realistically, you can't do better than about 25MB/s on a
single-threaded I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get much
Tom,
Raising work_mem to a gig should result in about five runs, needing only
one pass, which is really going to be as good as it gets. If you could
not see any difference then I see little hope for the idea that reducing
the number of merge passes will help.
Right. It *should have*, but
On Mon, 2005-10-03 at 13:34 -0700, Josh Berkus wrote:
Michael,
Realistically, you can't do better than about 25MB/s on a
single-threaded I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For
Jeff,
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get much more than that either.
I find this claim very suspicious. I get single-threaded reads in
excess of 1GB/sec with XFS and 250MB/sec with ext3.
Database reads? Or
of your physical IO subsystem, but the concept is valid for _any_
physical IO subsystem.
-Original Message-
From: Jeffrey W. Baker [EMAIL PROTECTED]
Sent: Oct 3, 2005 4:42 PM
To: josh@agliodbs.com
Cc:
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Mon, 2005-10-03 at 13:34
Jeff, Josh,
On 10/3/05 2:16 PM, Josh Berkus josh@agliodbs.com wrote:
Jeff,
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get much more than that either.
I find this claim very suspicious. I get single-threaded reads in
On Mon, 2005-10-03 at 14:16 -0700, Josh Berkus wrote:
Jeff,
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get much more than that either.
I find this claim very suspicious. I get single-threaded reads in
excess of
On Mon, 2005-10-03 at 14:16 -0700, Josh Berkus wrote:
Jeff,
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get much more than that either.
I find this claim very suspicious. I get single-threaded reads in
excess of 1GB/sec
On Sun, 2005-10-02 at 21:38 +0200, Martijn van Oosterhout wrote:
Ok, I tried two optimisations:
2. By specifying: -Winline -finline-limit-1500 (only on tuplesort.c).
This causes inlineApplySortFunction() to be inlined, like the code
obviously expects it to be.
default build (baseline)
Hannu,
On 10/3/05 2:43 PM, Hannu Krosing [EMAIL PROTECTED] wrote:
Just FYI, I ran a count(*) on a 15.6GB table on a lightly loaded db and
it ran in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
RAID10, reiserfs). A little less than 100MB/sec.
This confirms our findings -
Michael,
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get much more than that either.
You seem to be talking about database IO, which isn't what you said.
Right, well, it was what I meant. I failed to specify, that's all.
--
Jeffrey,
I guess database reads are different, but I remain unconvinced that they
are *fundamentally* different. After all, a tab-delimited file (my sort
workload) is a kind of database.
Unfortunately, they are ... because of CPU overheads. I'm basing what's
reasonable for data writes on
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeffrey,
I guess database reads are different, but I remain unconvinced that they
are *fundamentally* different. After all, a tab-delimited file (my sort
workload) is a kind of database.
Unfortunately
On 10/3/05, Ron Peacetree [EMAIL PROTECTED] wrote:
[snip]
Just how bad is this CPU bound condition? How powerful a CPU is
needed to attain a DB IO rate of 25MBps?
If we replace said CPU with one 2x, 10x, etc faster than that, do we
see any performance increase?
If a modest CPU can drive a
OK, change performance to single thread performance and we
still have a valid starting point for a discussion.
Ron
-Original Message-
From: Gregory Maxwell [EMAIL PROTECTED]
Sent: Oct 3, 2005 8:19 PM
To: Ron Peacetree [EMAIL PROTECTED]
Subject: Re: [HACKERS] [PERFORM] A Better External
On Sat, Oct 01, 2005 at 11:26:07PM -0400, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
Anyway, to bring some real info I just profiled PostgreSQL 8.1beta
doing an index create on a 2960296 row table (3 columns, table size
317MB).
3 columns in the index you mean? What
Ok, I tried two optimisations:
1. By creating a special version of comparetup_index for single key
integer indexes. Create an index_get_attr with byval and len args. By
using fetch_att and specifying the values at compile time, gcc
optimises the whole call to about 12 instructions of assembly
Jeffrey W. Baker [EMAIL PROTECTED] writes:
I think the largest speedup will be to dump the multiphase merge and
merge all tapes in one pass, no matter how large M. Currently M is
capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over
the tape. It could be done in a single
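[Editor's note: the pass count follows from the merge order: with memory for M-way merges, each pass over the data cuts the run count by a factor of M. A back-of-the-envelope calculator, assuming a simple balanced M-way merge; tuplesort.c's actual polyphase scheme distributes runs unevenly across tapes, which is how the quoted 13-pass figure arises, so this sketch gives a lower count.]

```c
/* Passes over the data for a balanced M-way merge:
 * effectively ceil(log_M(initial_runs)). 60GB sorted with 1GB of work
 * memory gives roughly 60 initial runs; merge order 6 needs 3 such
 * passes, while merging all 60 runs at once would need only 1. */
int merge_passes(long runs, int order)
{
    int passes = 0;
    while (runs > 1) {
        runs = (runs + order - 1) / order; /* one pass merges groups of M */
        passes++;
    }
    return passes;
}
```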
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeffrey W. Baker [EMAIL PROTECTED] writes:
I think the largest speedup will be to dump the multiphase merge and
merge all tapes in one pass, no matter how large M. Currently M is
capped at 6, so a sort of 60GB with 1GB sort memory needs 13
On Fri, 2005-09-30 at 13:41 -0700, Josh Berkus wrote:
Yeah, that's what I thought too. But try sorting an 10GB table, and
you'll see: disk I/O is practically idle, while CPU averages 90%+. We're
CPU-bound, because sort is being really inefficient about something. I
just don't know what
On Sat, 2005-10-01 at 02:01 -0400, Tom Lane wrote:
Jeffrey W. Baker [EMAIL PROTECTED] writes:
I think the largest speedup will be to dump the multiphase merge and
merge all tapes in one pass, no matter how large M. Currently M is
capped at 6, so a sort of 60GB with 1GB sort memory needs 13
On Fri, Sep 30, 2005 at 01:41:22PM -0700, Josh Berkus wrote:
Realistically, you can't do better than about 25MB/s on a single-threaded
I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Mike Stone
On Fri, 2005-09-30 at 13:38 -0700, Luke Lonergan wrote:
Bulk loading speed is irrelevant here - that is dominated by parsing, which
we have covered copiously (har har) previously and have sped up by 500%,
which still makes Postgres 1/2 the loading speed of MySQL.
Is this 1/2 of MySQL with
do, _nothing_ is going to help us as much)
Ron
-Original Message-
From: Tom Lane [EMAIL PROTECTED]
Sent: Oct 1, 2005 2:01 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeffrey W. Baker [EMAIL PROTECTED] writes:
I think the largest speedup will be to dump the multiphase
Ron Peacetree wrote:
The good news is all this means it's easy to demonstrate that we can
improve the performance of our sorting functionality.
Assuming we get the abysmal physical IO performance fixed...
(because until we do, _nothing_ is going to help us as much)
I for one would be
Josh Berkus josh@agliodbs.com writes:
The biggest single area where I see PostgreSQL external sort sucking is
on index creation on large tables. For example, for free version of
TPCH, it takes only 1.5 hours to load a 60GB Lineitem table on OSDL's
hardware, but over 3 hours to create
Tom Lane [EMAIL PROTECTED] writes:
Jeffrey W. Baker [EMAIL PROTECTED] writes:
I think the largest speedup will be to dump the multiphase merge and
merge all tapes in one pass, no matter how large M. Currently M is
capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over
On Sat, Oct 01, 2005 at 10:22:40AM -0400, Ron Peacetree wrote:
Assuming we get the abysmal physical IO performance fixed...
(because until we do, _nothing_ is going to help us as much)
I'm still not convinced this is the major problem. For example, in my
totally unscientific tests on an oldish
-Original Message-
From: Andrew Dunstan [EMAIL PROTECTED]
Sent: Oct 1, 2005 11:19 AM
To: Ron Peacetree [EMAIL PROTECTED]
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron Peacetree wrote:
The good news is all this means it's easy to demonstrate that we can
improve
From: Martijn van Oosterhout kleptog@svana.org
Sent: Oct 1, 2005 12:19 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Sat, Oct 01, 2005 at 10:22:40AM -0400, Ron Peacetree wrote:
Assuming we get the abysmal physical IO performance fixed...
(because until we do, _nothing_ is going to help us as much)
I'm still
[removed -performance, not subscribed]
On Sat, Oct 01, 2005 at 01:42:32PM -0400, Ron Peacetree wrote:
You have not said anything about what HW, OS version, and pg version
used here, but even at that can't you see that something Smells Wrong?
Somewhat old machine running 7.3 on Linux 2.4. Not
Martijn van Oosterhout kleptog@svana.org writes:
Anyway, to bring some real info I just profiled PostgreSQL 8.1beta
doing an index create on a 2960296 row table (3 columns, table size
317MB).
3 columns in the index you mean? What were the column datatypes?
Any null values?
The number 1
From: Pailloncy Jean-Gerard [EMAIL PROTECTED]
Sent: Sep 29, 2005 7:11 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeff Baker:
Your main example seems to focus on a large table where a key
column has constrained values. This case is interesting in
proportion to the number
From: Zeugswetter Andreas DAZ SD [EMAIL PROTECTED]
Sent: Sep 29, 2005 9:28 AM
Subject: RE: [HACKERS] [PERFORM] A Better External Sort?
In my original example, a sequential scan of the 1TB of 2KB
or 4KB records, = 250M or 500M records of data, being sorted
on a binary value key will take ~1000x
From: Josh Berkus josh@agliodbs.com
Sent: Sep 29, 2005 12:54 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
The biggest single area where I see PostgreSQL external
sort sucking is on index creation on large tables. For
example, for free version of TPCH, it takes only 1.5 hours
Just to add a little anarchy to your nice debate...
Who really needs all the results of a sort on your terabyte table?
I guess not many people do a SELECT from such a table and want all the
results. So, this leaves:
- Really wanting all the results, to fetch using
Ron,
Hmmm.
60GB/5400secs= 11MBps. That's ssllooww. So the first
problem is evidently our physical layout and/or HD IO layer
sucks.
Actually, it's much worse than that, because the sort is only dealing
with one column. As I said, monitoring the iostat our top speed was
2.2MB/s.
--Josh
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
[EMAIL PROTECTED] On Behalf Of PFC
Sent: Thursday, September 29, 2005 9:10 AM
To: [EMAIL PROTECTED]
Cc: Pg Hackers; pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort
Ron,
That 11MBps was your =bulk load= speed. If just loading a table
is this slow, then there are issues with basic physical IO, not just
IO during sort operations.
Oh, yeah. Well, that's separate from sort. See multiple posts on this
list from the GreenPlum team, the COPY patch for 8.1,
Ron,
On 9/30/05 1:20 PM, Ron Peacetree [EMAIL PROTECTED] wrote:
That 11MBps was your =bulk load= speed. If just loading a table
is this slow, then there are issues with basic physical IO, not just
IO during sort operations.
Bulk loading speed is irrelevant here - that is dominated by
I see the following routines that seem to be related to sorting.
If I were to examine these routines to consider ways to improve them, what
routines should I key in on? I am guessing that tuplesort.c is the hub
of activity for database sorting.
Directory of
Bulk loading speed is irrelevant here - that is dominated by parsing, which
we have covered copiously (har har) previously and have sped up by 500%,
which still makes Postgres 1/2 the loading speed of MySQL.
Let's ask MySQL 4.0
LOAD DATA INFILE blah
0 errors, 666 warnings
SHOW
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron,
Hmmm.
60GB/5400secs= 11MBps. That's ssllooww. So the first
problem is evidently our physical layout and/or HD IO layer
sucks.
Actually, it's much worse than that, because
josh@agliodbs.com
Sent: Sep 30, 2005 1:23 PM
To: Ron Peacetree [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron,
Hmmm.
60GB/5400secs= 11MBps. That's ssllooww. So the first
problem is evidently
josh@agliodbs.com
Sent: Sep 30, 2005 4:41 PM
To: Ron Peacetree [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron,
That 11MBps was your =bulk load= speed. If just loading a table
is this slow
-Original Message-
[EMAIL PROTECTED] On Behalf Of Jignesh K. Shah
Sent: Friday, September 30, 2005 1:38 PM
To: Ron Peacetree
Cc: Josh Berkus; pgsql-hackers@postgresql.org; pgsql-
[EMAIL PROTECTED]
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
I have seen similar performance as Josh and my
On 9/30/05, Ron Peacetree [EMAIL PROTECTED] wrote:
4= I'm sure we are paying all sorts of nasty overhead for essentially
emulating the pg filesystem inside another filesystem. That means
~2x as much overhead to access a particular piece of data.
The simplest solution is for us to implement a
On 9/28/05, Ron Peacetree [EMAIL PROTECTED] wrote:
2= We use my method to sort two different tables. We now have these
very efficient representations of a specific ordering on these tables. A
join operation can now be done using these Btrees rather than the
original data tables that involves
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On 9/28/05, Ron Peacetree [EMAIL PROTECTED] wrote:
2= We use my method to sort two different tables. We now have these
very efficient representations of a specific ordering on these tables. A
join operation can now be done using
Your main example seems to focus on a large table where a key column has
constrained values. This case is interesting in proportion to the
number of possible values. If I have billions of rows, each having one
of only two values, I can think of a trivial and very fast method of
returning
to the absolute minimum
was one of the design goals.
Reducing the total amount of IO to the absolute minimum should
help as well.
Ron
-Original Message-
From: Kevin Grittner [EMAIL PROTECTED]
Sent: Sep 27, 2005 11:21 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
I can't help
From: Jeffrey W. Baker [EMAIL PROTECTED]
Sent: Sep 29, 2005 12:27 AM
To: Ron Peacetree [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
You are engaging in a lengthy and verbose exercise in mental
From: Jeffrey W. Baker [EMAIL PROTECTED]
Sent: Sep 27, 2005 1:26 PM
To: Ron Peacetree [EMAIL PROTECTED]
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Tue, 2005-09-27 at 13:15 -0400, Ron Peacetree wrote:
That Btree can be used to generate a physical reordering of the data
in one
In the interest of efficiency and not reinventing the wheel, does anyone know
where I can find C or C++ source code for a Btree variant with the following
properties:
A= Data elements (RIDs) are only stored in the leaves, Keys (actually
KeyPrefixes; see D below) and Node pointers are only stored
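[Editor's note: for concreteness, the node layout Ron is asking about resembles a B+-tree with prefix-compressed keys. The struct sketch below uses hypothetical field names, not any particular library's; it only illustrates properties A (RIDs in leaves only) and the key-prefix idea.]

```c
/* Sketch of the requested Btree variant: internal nodes hold only key
 * prefixes and child pointers; RIDs (row identifiers) live only in the
 * leaves. All names are illustrative. */
#include <stdint.h>

#define FANOUT 64

typedef struct Rid {            /* row identifier: block number + slot */
    uint32_t block;
    uint16_t slot;
} Rid;

typedef struct Node Node;
struct Node {
    int      is_leaf;
    int      nkeys;
    uint64_t keyprefix[FANOUT];  /* truncated key prefixes only */
    union {
        Node *child[FANOUT + 1]; /* internal node: child pointers only */
        Rid   rid[FANOUT];       /* leaf: the data elements (RIDs) */
    } u;
};
```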
In my original example, a sequential scan of the 1TB of 2KB
or 4KB records, = 250M or 500M records of data, being sorted
on a binary value key will take ~1000x more time than reading
in the ~1GB Btree I described that used a Key+RID (plus node
pointers) representation of the data.
Imho
Jeff, Ron,
First off, Jeff, please take it easy. We're discussing 8.2 features at
this point and there's no reason to get stressed out at Ron. You can
get plenty stressed out when 8.2 is near feature freeze. ;-)
Regarding use cases for better sorts:
The biggest single area where I see
Josh,
On 9/29/05 9:54 AM, Josh Berkus josh@agliodbs.com wrote:
Following an index creation, we see that 95% of the time required is the
external sort, which averages 2MB/s. This is with separate drives for
the WAL, the pg_tmp, the table and the index. I've confirmed that
increasing
On Thu, Sep 29, 2005 at 10:06:52AM -0700, Luke Lonergan wrote:
Josh,
On 9/29/05 9:54 AM, Josh Berkus josh@agliodbs.com wrote:
Following an index creation, we see that 95% of the time required
is the external sort, which averages 2MB/s. This is with separate
drives for the WAL, the
On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:
Josh,
On 9/29/05 9:54 AM, Josh Berkus josh@agliodbs.com wrote:
Following an index creation, we see that 95% of the time required is the
external sort, which averages 2MB/s. This is with separate drives for
the WAL, the pg_tmp,
Jeff,
Josh, do you happen to know how many passes are needed in the multiphase
merge on your 60GB table?
No, any idea how to test that?
I think the largest speedup will be to dump the multiphase merge and
merge all tapes in one pass, no matter how large M. Currently M is
capped at 6, so a
On Thu, 2005-09-29 at 11:03 -0700, Josh Berkus wrote:
Jeff,
Josh, do you happen to know how many passes are needed in the multiphase
merge on your 60GB table?
No, any idea how to test that?
I would just run it under the profiler and see how many times
beginmerge() is called.
-jwb
Jeff,
I would just run it under the profiler and see how many times
beginmerge() is called.
Hmm, I'm not seeing it at all in the oprofile results on a 100 million-row
sort.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
To: Jeffrey W. Baker
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeff,
I would just run it under the profiler and see how many times
beginmerge() is called.
Hmm, I'm not seeing it at all in the oprofile results on a 100 million-row
sort.
--
--Josh
Josh Berkus
Aglio