PostgreSQL uses a "process-parallel" approach to parallelism, not
thread-level. There are lots of historical reasons, but that's just
the way it is for now.
Chris
--
| Christopher Petrilli
| [EMAIL PROTECTED]
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
and this is a hard thing in
any database, even Oracle.
Chris
On 7/19/05, Christopher Petrilli <[EMAIL PROTECTED]> wrote:
> It looks like the CVS HEAD is definitely "better," but not by a huge
> amount. The only difference is I wasn't running autovacuum in the
> background (default settings), but I don't think this e
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> How *exactly* are you invoking psql?
>
> > It is a subprocess of a Python process, driven usi
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> >> Are you sure the backend is reading directly from the file, and not
> >> through psql? (\copy, or COPY FROM STDIN, would go through psql.)
>
> > The e
As I'm doing this, I'm noticing something *VERY* disturbing to me:
postmaster backend: 20.3% CPU
psql frontend: 61.2% CPU
WTF? The only thing going through the front end is the COPY command,
and it's sent to the backend to read from a file?
Chris
reindex
> defrag your files on disk (stopping postgres and copying the database
> from your disk to another one and back will do)
> or even dump'n'reload the whole database
>
> I think useful information can be extracted that way. If one of these
>
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > Not sure... my benchmark is designed to represent what the database
> > will do under "typical" circumstances, and unfortunately these are
> > typic
anything (3-4x slower), but then it goes back to following the trend
line. The data in the chart for v8.0.3 includes running pg_autovacuum
(5 minutes).
Chris
s 500 rows. Note that fsync
is turned off here. Maybe it'd be more stable with it turned on?
Chris
On 7/18/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > On 7/18/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> I have no idea at all what's causing the sudden falloff in performance
> >> after about
On 7/18/05, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> On Jul 17, 2005, at 1:08 PM, Christopher Petrilli wrote:
>
> > Normally, checkpoint_segments can help absorb some of that, but my
> > experience is that if I crank the number up, it simply delays the
> > imp
On 7/18/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > http://blog.amber.org/diagrams/comparison_mysql_pgsql.png
>
> > Notice the VERY steep drop off.
>
> Hmm. Whatever that is, it's not checkpoint's f
Normally, checkpoint_segments can help absorb some of that, but my
experience is that if I crank the number up, it simply delays the
impact, and when it occurs, it takes a VERY long time (minutes) to
clear.
Thoughts?
Chris
> It has
> relatively poor concurrency, and creating a hash index is significantly
> slower than creating a b+-tree index.
This being the case, is there ever ANY reason for someone to use it?
If not, then shouldn't we consider deprecating it and eventually
removing it? This would reduce com
COPY FROM STDIN. This allows the backend process to
directly read the file, rather than shoving it over a pipe (thereby
potentially hitting the CPU multiple times). My experience is that
this is anywhere from 5-10x faster than INSERT statements on the
whole, and sometimes 200x.
Chris
--
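Since the thread keeps coming back to building load files for COPY, here is a minimal sketch of how such a file can be formatted, assuming PostgreSQL's default text COPY format (tab delimiter, `\N` for NULL, backslash escapes); the function names are hypothetical, not from the author's benchmark:

```python
def copy_escape(value):
    """Render one field in PostgreSQL's default text COPY format:
    None becomes \\N; backslash, tab, newline, and CR are escaped."""
    if value is None:
        return "\\N"
    s = str(value)
    return (s.replace("\\", "\\\\")
             .replace("\t", "\\t")
             .replace("\n", "\\n")
             .replace("\r", "\\r"))

def write_copy_file(path, rows):
    """Write rows (iterables of fields) as a tab-delimited COPY load file."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write("\t".join(copy_escape(v) for v in row) + "\n")
```

A file produced this way can be fed either to a server-side `COPY tab FROM '/path'` or streamed through `\copy`/`COPY FROM STDIN`.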
But I think if you want to use SQL*Loader on
Oracle, you have to do the same thing. I know a C++ app that I use
that runs SQL*Loader about once per second to deal with a HUGE volume
(10K/sec). In fact, moving the load files onto ramdisk has helped a
lot.
Chris
What you're likely seeing is the parse
overhead of the setup. When you use COPY (as opposed to \copy), the
postmaster is reading the file directly. There's just a lot less
overhead.
Can you write the files on disk and then kick off the psql process to run them?
Chris
> using Oracle.
Just as on Oracle you would use SQL*Loader for this application, you
should use the COPY syntax for PostgreSQL. You will find it a lot
faster. I have used it by building the input files and executing
'psql' with a COPY command, and also by using it with a subprocess.
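The subprocess approach described above can be sketched as follows; the database, table, and file names are hypothetical examples, and note that a server-side `COPY ... FROM '<path>'` makes the backend read the file itself, so psql only has to ship a single command:

```python
import subprocess

def psql_copy_command(dbname, table, loadfile):
    """Build the psql argv for a server-side COPY of a finished load file.
    (Server-side COPY reads the file with the backend's permissions.)"""
    sql = "COPY {} FROM '{}';".format(table, loadfile)
    return ["psql", "-d", dbname, "-c", sql]

def run_copy(dbname, table, loadfile):
    """Run the COPY and wait for psql to exit (i.e. the prompt to return)."""
    return subprocess.run(psql_copy_command(dbname, table, loadfile),
                          check=True)
```

This is the pattern the thread describes: write the load file, then kick off psql once per chunk rather than sending row-by-row INSERTs through the frontend.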
One interesting thing... PostgreSQL starts out a good bit faster, but
loses in the end.
Chris
ated fix of course is to increase shared_buffers.
>
> Splitting your tables at 4M, not 10M would work even better.
Unfortunately, given we are talking about billions of rows
potentially, I'm concerned about that many tables when it comes to
query time. I assume this will kick in the ge
On Apr 5, 2005 12:16 AM, Christopher Petrilli <[EMAIL PROTECTED]> wrote:
> Looking at preliminary results from running with shared_buffers at
> 16000, it seems this may be correct. Performance was flatter for a
> BIT longer, but slammed right into the wall and started hitting th
ot be solvable. Does anyone else have
much experience with this sort of sustained COPY?
Chris
--
| Christopher Petrilli
| [EMAIL PROTECTED]
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
         .00     0.00      0.00       3616           0
sda    23.15    68.09    748.89  246884021  2715312654
sdb    19.08    37.65    773.03  136515457  2802814036
The first 3 columns have been identical (or nearly so) the whole time,
which tells me the system is peg
On Apr 4, 2005 10:36 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > On Apr 4, 2005 12:23 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> do a test run with *no* indexes on the table, just to see if it behaves
>
here's the system configuration:
* AMD64/3000
* 2GB RAM (was 1GB, has made no difference)
* 1 x 120GB SATA drive (w/WAL), 7200RPM Seagate
* 1 x 160GB SATA drive (main), 7200RPM Seagate
Chris
Nope, I'm running a second run without the auxiliary indices. I only
have the primary key index. So far, a quick scan with the eye says
that it's behaving "better", but beginning to have issues again. I'll
post results as soon as they are done.
to building the loadfile.
Note that I'm specifically including the time it takes to get the
prompt back in the timing (but it does slip 1 loop, which isn't
relevant).
Chris
-rw-------  1 pgsql pgsql 8192 Apr  4 12:26 26488331
-rw-------  1 pgsql pgsql 8192 Apr  4 12:26 26488332
-rw-------  1 pgsql pgsql    0 Apr  4 12:26 26488334
-rw-------  1 pgsql pgsql    0 Apr  4 12:26 26488336
-rw-------  1 pgsql pgsql 8192 Apr  4 12:26 26488338
-rw-
On Apr 4, 2005 3:46 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-04-04 at 09:48 -0400, Christopher Petrilli wrote:
> > The point, in the rough middle, is where the program begins inserting
> > into a new table (inherited). The X axis is the "total"
On Apr 4, 2005 12:23 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > On Apr 4, 2005 11:52 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> Could we see the *exact* SQL definitions of the table and indexes?
>
>
On Apr 4, 2005 11:52 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > The table has:
> > * 21 columns (nothing too strange)
> > * No OIDS
> > * 5 indexes, including the primary key on a string
>
On Apr 1, 2005 3:59 PM, Christopher Petrilli <[EMAIL PROTECTED]> wrote:
> On Apr 1, 2005 3:53 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> >
> > > What seems to happen is it slams into a "wall" of some sort, the
> > > system goes into
a lot of the advice for 'loading data' doesn't apply when
you have a constant stream of load, rather than just sporadic bulk
loads. Any advice is more than appreciated.
Chris
On Apr 1, 2005 3:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > I can start at about 4,000 rows/second, but at about 1M rows, it
> > plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows.
> > That
MySQL
implementation (which uses no transactions) runs around 800-1000
rows/second sustained.
Just a point of reference. I'm trying to collect some data so that I
can provide some charts of the degradation, hoping to find the point
where it dies and thereby find the point where i
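The kind of instrumentation described here, timing each batch and finding where the insert rate collapses, can be sketched like this; the function names and the 50% threshold are illustrative assumptions, not the author's actual harness:

```python
import time

def timed_batches(insert_batch, batches):
    """Run insert_batch(batch) for each batch, recording rows/sec per batch."""
    rates = []
    for batch in batches:
        t0 = time.perf_counter()
        insert_batch(batch)
        dt = time.perf_counter() - t0
        rates.append(len(batch) / dt if dt > 0 else float("inf"))
    return rates

def find_knee(rates, threshold=0.5):
    """Return the index of the first batch whose rate drops below
    `threshold` times the initial rate, or None if it never does."""
    if not rates:
        return None
    baseline = rates[0]
    for i, r in enumerate(rates):
        if r < baseline * threshold:
            return i
    return None
```

Plotting the `rates` list against cumulative rows inserted would produce exactly the sort of chart discussed in this thread, with the knee marking where performance "slams into the wall."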