On Thu, Sep 29, 2005 at 03:28:27PM +0200, Zeugswetter Andreas DAZ SD wrote:
>
> > In my original example, a sequential scan of the 1TB of 2KB
> > or 4KB records, => 250M or 500M records of data, being sorted
> > on a binary value key will take ~1000x more time than reading
> > in the ~1GB Btree
On Thu, Oct 06, 2005 at 04:25:11PM -0400, Tom Lane wrote:
> Martijn van Oosterhout writes:
> > Are we awfully worried about people still using 2.0 kernels? And it
> > would replace two calls with three in the worst case, we currently
> > lseek before every read.
>
> That's utterly false.
Oops, y
On Thu, Oct 06, 2005 at 03:57:38PM -0400, Tom Lane wrote:
> Martijn van Oosterhout writes:
> > Indeed, one of the things on my list is to remove all the lseeks in
> > favour of pread. Halving the number of kernel calls has got to be worth
> something right? Portability is an issue of course...
>
Martijn van Oosterhout writes:
> Are we awfully worried about people still using 2.0 kernels? And it
> would replace two calls with three in the worst case, we currently
> lseek before every read.
That's utterly false.
regards, tom lane
Martijn van Oosterhout writes:
> Indeed, one of the things on my list is to remove all the lseeks in
> favour of pread. Halving the number of kernel calls has got to be worth
> something right? Portability is an issue of course...
Being sure that it's not a pessimization is another issue. I note
Andreas,
On 10/6/05 3:56 AM, "Zeugswetter Andreas DAZ SD" <[EMAIL PROTECTED]>
wrote:
> pg relies on the OS readahead (== larger block IO) to do efficient IO.
> Basically the pg scan performance should match a dd if=file of=/dev/null
> bs=8k, unless CPU bound.
Which it is. Postgres will current
On Wed, Oct 05, 2005 at 07:54:15PM -0400, Ron Peacetree wrote:
> I asked some questions about physical layout and format translation
> overhead being possibly suboptimal that seemed to be agreed to, but
> specifics as to where we are taking the hit don't seem to have been
> made explicit yet.
This
Andreas,
pg relies on the OS readahead (== larger block IO) to do efficient IO.
Basically the pg scan performance should match a dd if=file of=/dev/null
bs=8k, unless CPU bound.
FWIW, we could improve performance by creating larger write blocks when
appropriate, particularly on Unixes like Sol
On Wed, Oct 05, 2005 at 04:55:51PM -0700, Luke Lonergan wrote:
You've proven my point completely. This process is bottlenecked in the CPU.
The only way to improve it would be to optimize the system (libc) functions
like "fread" where it is spending most of its time.
Or to optimize its IO hand
> Now I've asked for the quickest path to detailed
> understanding of the pg IO subsystem. The goal being to get
> more up to speed on its coding details. Certainly not to
> annoy you or anyone else.
Basically pg does random 8k (compile time blocksize) reads/writes only.
Bitmap and sequentia
On K, 2005-10-05 at 19:54 -0400, Ron Peacetree wrote:
> +I made the "from left field" suggestion that perhaps a pg native fs
> format would be worth consideration. This is a major project, so
> the suggestion was to at least some extent tongue-in-cheek.
This idea is discussed about once a year o
On K, 2005-10-05 at 13:21 -0400, Ron Peacetree wrote:
> First I wanted to verify that pg's IO rates were inferior to The Competition.
> Now there's at least an indication that someone else has solved similar
> problems. Existence proofs make some things easier ;-)
>
> Is there any detailed progra
Michael,
On 10/5/05 8:33 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> real  0m8.889s
> user  0m0.877s
> sys   0m8.010s
>
> it's not in disk wait state (in fact the whole read was cached) but it's
> only getting 1MB/s.
You've proven my point completely. This process is bottlenecked
I'm putting in as much time as I can afford thinking about pg related
performance issues. I'm doing it because of a sincere desire to help
understand and solve them, not to annoy people.
If I didn't believe in pg, I wouldn't be posting thoughts about how to
make it better.
It's probably worth s
level architectural doc set for pg? I know
> "the best doc is the code", but the code in isolation is often the Slow Path
> to
> understanding with systems as complex as a DBMS IO layer.
>
> Ron
>
>
> -Original Message-
> From: "Joshua D. Drake
On 10/6/05, Michael Stone <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:
> >Nope - it would be disk wait.
>
> I said I/O overhead; i.e., it could be the overhead of calling the
> kernel for I/O's. E.g., the following process is having I/O problems:
>
> t
On Wed, 2005-10-05 at 12:14 -0400, Ron Peacetree wrote:
> I've now gotten verification from multiple working DBA's that DB2, Oracle, and
> SQL Server can achieve ~250MBps ASTR (with as much as ~500MBps ASTR in
> setups akin to Oracle RAC) when attached to a decent (not outrageous, but
> decent) HD
On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:
Nope - it would be disk wait.
I said I/O overhead; i.e., it could be the overhead of calling the
kernel for I/O's. E.g., the following process is having I/O problems:
time dd if=/dev/sdc of=/dev/null bs=1 count=1000
"the best doc is the code", but the code in isolation is often the Slow Path to
understanding with systems as complex as a DBMS IO layer.
Ron
-Original Message-
From: "Joshua D. Drake" <[EMAIL PROTECTED]>
Sent: Oct 5, 2005 1:18 PM
Subject: Re: [HACKERS] [PERFORM] A B
> We have to fix this.
> Ron
>
The source is freely available for your perusal. Please feel free to
point us in specific directions in the code where you may see some
benefit. I am positive all of us that can, would put resources into
fixing the issue had we a specific direction to attack.
S
Sent: Wed Oct 05 09:58:41 2005
To: Martijn van Oosterhout
Cc: pgsql-hackers@postgresql.org; pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
>COPY TO /dev/null WITH
@postgresql.org; pgsql-performance@postgresql.org
Subject:Re: [HACKERS] [PERFORM] A Better External Sort?
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
>COPY TO /dev/null WITH binary
>13MB/s  55% user 45% system (ergo, CPU bound)
[snip]
>the most expensiv
On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:
Just FYI, I ran a count(*) on a 15.6GB table on a lightly loaded db and
it ran in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
RAID10, reiserfs). A little less than 100MB/sec.
And none of that 15G table is in the 6
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
COPY TO /dev/null WITH binary
13MB/s  55% user 45% system (ergo, CPU bound)
[snip]
the most expensive. But it does point out that the whole process is
probably CPU bound more than anything else.
Note that 45% of that c
On Wed, Oct 05, 2005 at 05:41:25AM -0400, Michael Stone wrote:
> On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
> >COPY TO /dev/null WITH binary
> >13MB/s  55% user 45% system (ergo, CPU bound)
> [snip]
> >the most expensive. But it does point out that the whole process
On K, 2005-10-05 at 05:43 -0400, Michael Stone wrote:
> On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:
> >Just FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and
> >it run in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
> >RAID10, reiserfs). A littl
On Mon, Oct 03, 2005 at 01:34:01PM -0700, Josh Berkus wrote:
>Realistically, you can't do better than about 25MB/s on a
> single-threaded I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison,
On Tue, Oct 04, 2005 at 05:23:41PM +0200, Martijn van Oosterhout wrote:
> On Tue, Oct 04, 2005 at 03:56:53PM +0100, Simon Riggs wrote:
> > I've been using gcc 3.4 and saw no warning when using either "-Winline"
> > or "-O3 -Winline".
> Ok, I've just installed 3.4 and verified that. I examined the a
On Tue, Oct 04, 2005 at 03:56:53PM +0100, Simon Riggs wrote:
> I've been using gcc 3.4 and saw no warning when using either "-Winline"
> or "-O3 -Winline".
Ok, I've just installed 3.4 and verified that. I examined the asm code
and gcc is inlining it. I concede, at this point just throw in -Winline
On Tue, Oct 04, 2005 at 10:06:24AM -0400, Tom Lane wrote:
> Martijn van Oosterhout writes:
> > I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
>
> I don't know what the units of this number are, but it's apparently far
> too gcc-version-dependent to consider putting into our build scripts.
> Using
Martijn van Oosterhout writes:
> 1. Add -Winline so we can at least be aware of when it's (not) happening.
Yeah, I agree with that part, just not with adding a fixed -finline-limit
value.
While on the subject of gcc warnings ... if I touch that code, I want to
remove -Wold-style-definition from
On Tue, 2005-10-04 at 16:30 +0200, Martijn van Oosterhout wrote:
> On Tue, Oct 04, 2005 at 10:06:24AM -0400, Tom Lane wrote:
> > Martijn van Oosterhout writes:
> > > I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
> >
> > I don't know what the units of this number are, but it's apparently far
> >
Martijn van Oosterhout writes:
> A quick binary search puts the cutoff between 1200 and 1300. Given
> version variation I picked a nice round number, 1500.
> Ugh, that's for -O2, for -O3 and above it needs to be 4100 to work.
> Maybe we should go for 5000 or so.
> I'm using: gcc (GCC) 3.3.5 (Deb
Sent: Oct 4, 2005 8:24 AM
To: Simon Riggs <[EMAIL PROTECTED]>
Cc: Tom Lane <[EMAIL PROTECTED]>, Ron Peacetree <[EMAIL PROTECTED]>,
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Tue, Oct 04, 2005 at 12:24:54PM +0100, Simon Riggs wrote:
>
On Tue, Oct 04, 2005 at 12:24:54PM +0100, Simon Riggs wrote:
> How did you determine the 1500 figure? Can you give some more info to
> surround that recommendation to allow everybody to evaluate it?
[EMAIL PROTECTED]:~/dl/cvs/pgsql-local/src/backend/utils/sort$ gcc
-finline-limit-1000 -Winline -O
On Tue, 2005-10-04 at 12:04 +0200, Martijn van Oosterhout wrote:
> On Mon, Oct 03, 2005 at 10:51:32PM +0100, Simon Riggs wrote:
> > > Basically, I recommend adding "-Winline -finline-limit-1500" to the
> > > default build while we discuss other options.
> >
> > I add -Winline but get no warnings.
On Mon, Oct 03, 2005 at 10:51:32PM +0100, Simon Riggs wrote:
> > Basically, I recommend adding "-Winline -finline-limit-1500" to the
> > default build while we discuss other options.
>
> I add -Winline but get no warnings. Why would I use -finline-limit-1500?
>
> I'm interested, but uncertain as
OK, change "performance" to "single thread performance" and we
still have a valid starting point for a discussion.
Ron
-Original Message-
From: Gregory Maxwell <[EMAIL PROTECTED]>
Sent: Oct 3, 2005 8:19 PM
To: Ron Peacetree <[EMAIL PROTECTED]>
Subject:
On 10/3/05, Ron Peacetree <[EMAIL PROTECTED]> wrote:
[snip]
> Just how bad is this CPU bound condition? How powerful a CPU is
> needed to attain a DB IO rate of 25MBps?
>
> If we replace said CPU with one 2x, 10x, etc faster than that, do we
> see any performance increase?
>
> If a modest CPU can
t; <[EMAIL PROTECTED]>
Cc:
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Jeffrey,
> I guess database reads are different, but I remain unconvinced that they
> are *fundamentally* different. After all, a tab-delimited file (my sort
> workload) is a kind of database.
Unfortu
Jeffrey,
> I guess database reads are different, but I remain unconvinced that they
> are *fundamentally* different. After all, a tab-delimited file (my sort
> workload) is a kind of database.
Unfortunately, they are ... because of CPU overheads. I'm basing what's
"reasonable" for data writes
Michael,
> >Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
> >Big-Name Proprietary Database doesn't get much more than that either.
>
> You seem to be talking about database IO, which isn't what you said.
Right, well, it was what I meant. I failed to specify, that's all.
Hannu,
On 10/3/05 2:43 PM, "Hannu Krosing" <[EMAIL PROTECTED]> wrote:
> Just FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and
> it run in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
> RAID10, reiserfs). A little less than 100MB sec.
This confirms our findings
On Sun, 2005-10-02 at 21:38 +0200, Martijn van Oosterhout wrote:
> Ok, I tried two optimisations:
>
> 2. By specifying: -Winline -finline-limit-1500 (only on tuplesort.c).
> This causes inlineApplySortFunction() to be inlined, like the code
> obviously expects it to be.
>
> default build (baseli
On E, 2005-10-03 at 14:16 -0700, Josh Berkus wrote:
> Jeff,
>
> > > Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
> > > Big-Name Proprietary Database doesn't get much more than that either.
> >
> > I find this claim very suspicious. I get single-threaded reads in
> > exce
On Mon, 2005-10-03 at 14:16 -0700, Josh Berkus wrote:
> Jeff,
>
> > > Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
> > > Big-Name Proprietary Database doesn't get much more than that either.
> >
> > I find this claim very suspicious. I get single-threaded reads in
> > ex
Jeff, Josh,
On 10/3/05 2:16 PM, "Josh Berkus" wrote:
> Jeff,
>
>>> Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
>>> Big-Name Proprietary Database doesn't get much more than that either.
>>
>> I find this claim very suspicious. I get single-threaded reads in
>> excess
ASTR of your physical IO subsystem, but the concept is valid for _any_
physical IO subsystem.
-Original Message-
From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
Sent: Oct 3, 2005 4:42 PM
To: josh@agliodbs.com
Cc:
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On
Jeff,
> > Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
> > Big-Name Proprietary Database doesn't get much more than that either.
>
> I find this claim very suspicious. I get single-threaded reads in
> excess of 1GB/sec with XFS and > 250MB/sec with ext3.
Database reads?
On Mon, 2005-10-03 at 13:34 -0700, Josh Berkus wrote:
> Michael,
>
> > >Realistically, you can't do better than about 25MB/s on a
> > > single-threaded I/O on current Linux machines,
> >
> > What on earth gives you that idea? Did you drop a zero?
>
> Nope, LOTS of testing, at OSDL, GreenPlum and
Tom,
> Raising work_mem to a gig should result in about five runs, needing only
> one pass, which is really going to be as good as it gets. If you could
> not see any difference then I see little hope for the idea that reducing
> the number of merge passes will help.
Right. It *should have*, bu
Michael,
> >Realistically, you can't do better than about 25MB/s on a
> > single-threaded I/O on current Linux machines,
>
> What on earth gives you that idea? Did you drop a zero?
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A
Big-Name Proprietary Database doesn't get mu
Ok, I tried two optimisations:
1. By creating a special version of comparetup_index for single key
integer indexes. Create an index_get_attr with byval and len args. By
using fetch_att and specifying the values at compile time, gcc
optimises the whole call to about 12 instructions of assembly rath
On Sat, Oct 01, 2005 at 11:26:07PM -0400, Tom Lane wrote:
> Martijn van Oosterhout writes:
> > Anyway, to bring some real info I just profiled PostgreSQL 8.1beta
> > doing an index create on a 2960296 row table (3 columns, table size
> > 317MB).
>
> 3 columns in the index you mean? What were the
Martijn van Oosterhout writes:
> Anyway, to bring some real info I just profiled PostgreSQL 8.1beta
> doing an index create on a 2960296 row table (3 columns, table size
> 317MB).
3 columns in the index you mean? What were the column datatypes?
Any null values?
> The number 1 bottleneck with 41
[removed -performance, not subscribed]
On Sat, Oct 01, 2005 at 01:42:32PM -0400, Ron Peacetree wrote:
> You have not said anything about what HW, OS version, and pg version
> used here, but even at that can't you see that something Smells Wrong?
Somewhat old machine running 7.3 on Linux 2.4. Not
From: Martijn van Oosterhout
Sent: Oct 1, 2005 12:19 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
On Sat, Oct 01, 2005 at 10:22:40AM -0400, Ron Peacetree wrote:
> Assuming we get the abysmal physical IO performance fixed...
> (because until we do, _nothing_ is going to help us as much)
I'm still
so
that I can.)
Ron
-Original Message-
From: Andrew Dunstan <[EMAIL PROTECTED]>
Sent: Oct 1, 2005 11:19 AM
To: Ron Peacetree <[EMAIL PROTECTED]>
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron Peacetree wrote:
>The good news is all this means it's easy
On Sat, Oct 01, 2005 at 10:22:40AM -0400, Ron Peacetree wrote:
> Assuming we get the abysmal physical IO performance fixed...
> (because until we do, _nothing_ is going to help us as much)
I'm still not convinced this is the major problem. For example, in my
totally unscientific tests on an oldis
Tom Lane <[EMAIL PROTECTED]> writes:
> "Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> > I think the largest speedup will be to dump the multiphase merge and
> > merge all tapes in one pass, no matter how large M. Currently M is
> > capped at 6, so a sort of 60GB with 1GB sort memory needs 13 p
Josh Berkus writes:
> The biggest single area where I see PostgreSQL external sort sucking is
> on index creation on large tables. For example, for free version of
> TPCH, it takes only 1.5 hours to load a 60GB Lineitem table on OSDL's
> hardware, but over 3 hours to create each index on th
Ron Peacetree wrote:
The good news is all this means it's easy to demonstrate that we can
improve the performance of our sorting functionality.
Assuming we get the abysmal physical IO performance fixed...
(because until we do, _nothing_ is going to help us as much)
I for one would be p
rmance fixed...
(because until we do, _nothing_ is going to help us as much)
Ron
-Original Message-
From: Tom Lane <[EMAIL PROTECTED]>
Sent: Oct 1, 2005 2:01 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
"Jeffrey W. Baker" <[EMAIL PROTECTED]> write
On R, 2005-09-30 at 13:38 -0700, Luke Lonergan wrote:
>
> Bulk loading speed is irrelevant here - that is dominated by parsing, which
> we have covered copiously (har har) previously and have sped up by 500%,
> which still makes Postgres < 1/2 the loading speed of MySQL.
Is this < 1/2 of MySQL w
On Fri, Sep 30, 2005 at 01:41:22PM -0700, Josh Berkus wrote:
Realistically, you can't do better than about 25MB/s on a single-threaded
I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Mike Stone
---(end of broadcast)---
On Sat, 2005-10-01 at 02:01 -0400, Tom Lane wrote:
> "Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> > I think the largest speedup will be to dump the multiphase merge and
> > merge all tapes in one pass, no matter how large M. Currently M is
> > capped at 6, so a sort of 60GB with 1GB sort memor
On Fri, 2005-09-30 at 13:41 -0700, Josh Berkus wrote:
> Yeah, that's what I thought too. But try sorting an 10GB table, and
> you'll see: disk I/O is practically idle, while CPU averages 90%+. We're
> CPU-bound, because sort is being really inefficient about something. I
> just don't know wh
rmance@postgresql.org
> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
> "Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> > I think the largest speedup will be to dump the multiphase merge and
> > merge all tapes in one pass, no matter how large M.
"Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> I think the largest speedup will be to dump the multiphase merge and
> merge all tapes in one pass, no matter how large M. Currently M is
> capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over
> the tape. It could be done in a s
g; pgsql-
> [EMAIL PROTECTED]
> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
> On 9/28/05, Ron Peacetree <[EMAIL PROTECTED]> wrote:
> > 2= We use my method to sort two different tables. We now have these
> > very efficient representations of a specific ordering
On 9/28/05, Ron Peacetree <[EMAIL PROTECTED]> wrote:
> 2= We use my method to sort two different tables. We now have these
> very efficient representations of a specific ordering on these tables. A
> join operation can now be done using these Btrees rather than the
> original data tables that inv
On 9/30/05, Ron Peacetree <[EMAIL PROTECTED]> wrote:
> 4= I'm sure we are paying all sorts of nasty overhead for essentially
> emulating the pg "filesystem" inside another filesystem. That means
> ~2x as much overhead to access a particular piece of data.
>
> The simplest solution is for us to imp
sic physical IO, not just
> >IO during sort operations.
> >
> >As I said, the obvious candidates are inefficient physical layout
> >and/or flawed IO code.
> >
> >Until the basic IO issues are addressed, we could replace the
> >present sorting code with infi
less ROI.
Ron
-Original Message-
From: Josh Berkus
Sent: Sep 30, 2005 4:41 PM
To: Ron Peacetree <[EMAIL PROTECTED]>
Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron,
> That 11MBps was your =bulk load
-Original Message-
From: Josh Berkus
Sent: Sep 30, 2005 1:23 PM
To: Ron Peacetree <[EMAIL PROTECTED]>
Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron,
Hmmm.
60GB/5400secs= 11MBps. That's ssllooww. So the first
pr
sql-hackers@postgresql.org, pgsql-performance@postgresql.org
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
Ron,
> Hmmm.
> 60GB/5400secs= 11MBps. That's ssllooww. So the first
> problem is evidently our physical layout and/or HD IO layer
> sucks.
Actually, it's much worse th
Bulk loading speed is irrelevant here - that is dominated by parsing,
which
we have covered copiously (har har) previously and have sped up by 500%,
which still makes Postgres < 1/2 the loading speed of MySQL.
Let's ask MySQL 4.0
LOAD DATA INFILE blah
0 errors, 666 warnings
SHOW
I see the following routines that seem to be related to sorting.
If I were to examine these routines to consider ways to improve it, what
routines should I key in on? I am guessing that tuplesort.c is the hub
of activity for database sorting.
Directory of U:\postgresql-snapshot\src\backend\acces
Ron,
On 9/30/05 1:20 PM, "Ron Peacetree" <[EMAIL PROTECTED]> wrote:
> That 11MBps was your =bulk load= speed. If just loading a table
> is this slow, then there are issues with basic physical IO, not just
> IO during sort operations.
Bulk loading speed is irrelevant here - that is dominated by
Ron,
> That 11MBps was your =bulk load= speed. If just loading a table
> is this slow, then there are issues with basic physical IO, not just
> IO during sort operations.
Oh, yeah. Well, that's separate from sort. See multiple posts on this
list from the GreenPlum team, the COPY patch for 8.1
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
> [EMAIL PROTECTED] On Behalf Of PFC
> Sent: Thursday, September 29, 2005 9:10 AM
> To: [EMAIL PROTECTED]
> Cc: Pg Hackers; pgsql-performance@postgresql.org
> Subject: Re: [HACKERS] [PERFORM] A
Ron,
Hmmm.
60GB/5400secs= 11MBps. That's ssllooww. So the first
problem is evidently our physical layout and/or HD IO layer
sucks.
Actually, it's much worse than that, because the sort is only dealing
with one column. As I said, monitoring the iostat our top speed was
2.2mb/s.
--Josh
-
Just to add a little anarchy in your nice debate...
Who really needs all the results of a sort on your terabyte table ?
I guess not many people do a SELECT from such a table and want all the
results. So, this leaves:
- Really wanting all the results, to fetch using
>From: Josh Berkus
>Sent: Sep 29, 2005 12:54 PM
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
>The biggest single area where I see PostgreSQL external
>sort sucking is on index creation on large tables. For
>example, for free version of TPCH, it takes only
>From: Zeugswetter Andreas DAZ SD <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 9:28 AM
>Subject: RE: [HACKERS] [PERFORM] A Better External Sort?
>
>>In my original example, a sequential scan of the 1TB of 2KB
>>or 4KB records, => 250M or 500M records of data, being sor
>From: Pailloncy Jean-Gerard <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 7:11 AM
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>>>Jeff Baker:
>>>Your main example seems to focus on a large table where a key
>>>column has constrained values. Th
Jeff,
On 9/29/05 10:44 AM, "Jeffrey W. Baker" <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:
> Looking through tuplesort.c, I have a couple of initial ideas. Are we
> allowed to fork here? That would open up the possibility of using the
> CPU and the I/O in
@postgresql.org
> Cc: Jeffrey W. Baker
> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
> Jeff,
>
> > I would just run it under the profiler and see how many times
> > beginmerge() is called.
>
> Hmm, I'm not seeing it at all in the oprofile
Jeff,
> I would just run it under the profiler and see how many times
> beginmerge() is called.
Hmm, I'm not seeing it at all in the oprofile results on a 100million-row
sort.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
On Thu, 2005-09-29 at 11:03 -0700, Josh Berkus wrote:
> Jeff,
>
> > Josh, do you happen to know how many passes are needed in the multiphase
> > merge on your 60GB table?
>
> No, any idea how to test that?
I would just run it under the profiler and see how many times
beginmerge() is called.
-jw
Jeff,
> Josh, do you happen to know how many passes are needed in the multiphase
> merge on your 60GB table?
No, any idea how to test that?
> I think the largest speedup will be to dump the multiphase merge and
> merge all tapes in one pass, no matter how large M. Currently M is
> capped at 6,
On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:
> Josh,
>
> On 9/29/05 9:54 AM, "Josh Berkus" wrote:
>
> > Following an index creation, we see that 95% of the time required is the
> > external sort, which averages 2mb/s. This is with separate drives for
> > the WAL, the pg_tmp, the tabl
On Thu, Sep 29, 2005 at 10:06:52AM -0700, Luke Lonergan wrote:
> Josh,
>
> On 9/29/05 9:54 AM, "Josh Berkus" wrote:
>
> > Following an index creation, we see that 95% of the time required
> > is the external sort, which averages 2mb/s. This is with separate
> > drives for the WAL, the pg_tmp, t
Josh,
On 9/29/05 9:54 AM, "Josh Berkus" wrote:
> Following an index creation, we see that 95% of the time required is the
> external sort, which averages 2mb/s. This is with separate drives for
> the WAL, the pg_tmp, the table and the index. I've confirmed that
> increasing work_mem beyond a s
Jeff, Ron,
First off, Jeff, please take it easy. We're discussing 8.2 features at
this point and there's no reason to get stressed out at Ron. You can
get plenty stressed out when 8.2 is near feature freeze. ;-)
Regarding use cases for better sorts:
The biggest single area where I see Po
> In my original example, a sequential scan of the 1TB of 2KB
> or 4KB records, => 250M or 500M records of data, being sorted
> on a binary value key will take ~1000x more time than reading
> in the ~1GB Btree I described that used a Key+RID (plus node
> pointers) representation of the data.
In the interest of efficiency and "not reinventing the wheel", does anyone know
where I can find C or C++ source code for a Btree variant with the following
properties:
A= Data elements (RIDs) are only stored in the leaves, Keys (actually
KeyPrefixes; see "D" below) and Node pointers are only stor
>From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
>Sent: Sep 27, 2005 1:26 PM
>To: Ron Peacetree <[EMAIL PROTECTED]>
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
>On Tue, 2005-09-27 at 13:15 -0400, Ron Peacetree wrote:
>
>>That Btree can
>From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 12:27 AM
>To: Ron Peacetree <[EMAIL PROTECTED]>
>Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
>You are e