John,
One other thing I forgot to mention with solution 2: if you are going
to be adding a fair number of records to the table on an ongoing basis,
you will want to drop the indexes first and re-create them after the load is
complete. I have tried it both ways and dropping is faster overall.
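Something along these lines (index and file names made up):

    DROP INDEX baz_customer_idx;
    COPY baz FROM '/tmp/baz_load.dat';                   -- bulk load with the index gone
    CREATE INDEX baz_customer_idx ON baz (customer_id);  -- rebuild in one pass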
--s
Sean -
I believe auto-commit was off (not at the box right now). I'll play with
the commit interval; I know commits are expensive operations.
Thanks for item 2. I was toying with the notion of pre-creating 10
bazes, off-loading them, and then seeing if the COPY would be any faster; you
saved me the trouble.
Thanks Rod.
While I was waiting for my post to make it, I went ahead and made the key an
int. That improved things a lot, but it was still pretty slow.
This is weird:
I was testing in a query window thus:
UPDATE baz SET customer_id = '1234' WHERE baz_number = ( SELECT baz_number FROM baz WHERE
customer_id IS NULL LIMIT 1 );
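If the inner SELECT is what's slow, a partial index over just the unassigned
rows might help; a sketch (the index name is made up):

    CREATE INDEX baz_unassigned_idx ON baz (baz_number)
        WHERE customer_id IS NULL;

That way finding a free baz_number doesn't have to walk the whole table.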
Pardon this for looking somewhat "weird" but it seems I'm not getting all
the messages to the list.. I've noticed the past few days a lot of them are
coming out of order as well..
So, this was copy/pasted from the web archive of the list..
Greg Stark writes:
> Note that pctfree/pctused are a big performance drain on the usual case. Try
> setting them to 0/100 on a table that doesn't get updates (like a many-many
> relation table) and see how much faster it is to insert and scan.
Right. You can optimize each table independently.
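For reference, those are per-table Oracle storage parameters; the 0/100
setting Greg describes would look something like this (Oracle syntax, table
name made up):

    CREATE TABLE order_item (
        order_id    INTEGER NOT NULL,
        product_id  INTEGER NOT NULL
    ) PCTFREE 0 PCTUSED 100;    -- reserve no free space; keep blocks packed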
Vivek Khera writes:
> Also, how close are you to the capacity of your disk bandwidth? I
> don't see that in your numbers. I know in freebsd I can run "systat
> -vmstat" and it gives me a percentage of utilization that lets me know
> when I'm near the capacity.
The vacuum totally consumes the system.
Mario Weilguni writes:
> of course both approaches have advantages, it simply depends on the usage
> pattern. A case where oracle really rules over postgresql are m<-->n
> connection tables where each record consists of two foreign keys, the
> overwrite approach is a big win here.
That's usually
Stephen writes:
> I ran into the same problem with VACUUM on my Linux box. If you are running
> Linux, take a look at "elvtune" or read this post:
The default values were -r 64 -w 8192. The article said this was
"optimal". I just futzed with different values anywere from -w 128 -r
128 to -r 16 -
Christopher Browne wrote on Fri, 24.10.2003 at 22:10:
> That might be something of an improvement, but it oughtn't be
> cripplingly different to use a text field rather than an integer.
I suspect his slowness comes from not running analyze when it would be
time to start using indexes for fk checks.
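Easy enough to test by hand after the load; a sketch, assuming an index on
customer_id exists:

    ANALYZE baz;    -- refresh the row-count statistics
    EXPLAIN SELECT 1 FROM baz WHERE customer_id = '1234';
    -- should flip from a seq scan to an index scan once stats are current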
Just thought I'd mention that on top of optimising postgres as much as
possible, don't forget how much something like memcached can do for you:
http://www.danga.com/memcached/
We use it on www.last.fm; most pages only take one or two database hits,
compared with 30 to 40 when memcached is turned off.
Just an interesting comparison:
I don't have the specifics, but a Dell 2 x 2.4GHz / 512KB L3 / 2GB RAM machine timed a
query much faster than an older Sun E4000 with 6 x ~300MHz CPUs / 2GB RAM. One was on RH (8
or 9, I don't remember) and one on Solaris 9.
-anjan
[EMAIL PROTECTED] ("John Pagakis") writes:
> Greetings.
>
> I have a table that will require 100,000 rows initially.
>
> Assume the following (some of the field names have been changed for
> confidentiality reasons):
>
> CREATE TABLE baz (
> baz_number CHAR(15) NOT NULL,
> customer_id CHAR(
John,
Are you treating each insertion as a separate transaction? If so, the
performance will suffer. I am doing the same thing in building a data
warehouse using PG. I have to load millions of records each night. I
do two different things:
1) If I need to keep the insertions inside the Java process, I turn off
auto-commit and only commit every few thousand inserts.
2) Otherwise I write the rows to a flat file and pull them in with COPY,
which is far faster than row-by-row INSERTs.
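A sketch of both, with made-up values and file name:

    -- 1) many inserts per transaction instead of a commit per row
    BEGIN;
    INSERT INTO baz (baz_number, customer_id) VALUES ('BAZ0001', NULL);
    -- ... a few thousand more inserts ...
    COMMIT;

    -- 2) or dump to a flat file and load it in one shot
    COPY baz (baz_number, customer_id) FROM '/tmp/baz.dat';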
I ran into the same problem with VACUUM on my Linux box. If you are running
Linux, take a look at "elvtune" or read this post:
http://groups.google.com/groups?q=stephen+vacuum+linux&hl=en&lr=&ie=UTF-8&selm=gRdjb.7484%241o2.77%40nntp-post.primus.ca&rnum=3
Regards, Stephen
"Rob Nagler" <[EMAIL P
I have never worked with a Xeon CPU before. Does anyone know how it performs
running PostgreSQL 7.3.4 / 7.4 on RedHat 9? Is it faster than a Pentium 4?
I believe the main difference is cache memory, right? Aside from the cache,
it's basically a Pentium 4, or am I wrong?
Anjan Dave wrote:
Shared_buffers (25% of RAM / 8KB) = 8589934592 * 0.25 / 8192 = 262144
250,000 is probably the max you can use due to the 2GB process limit,
unless you recompile the Linux kernel to use a 3GB process / 1GB kernel split.
Yes, I've got 8GB also; I started at 262144 and kept working my way down.
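In postgresql.conf terms that's just one line (7.x takes the value in 8KB
buffers, not bytes); a sketch:

    shared_buffers = 250000    # ~1.9GB, just under the 2GB per-process limit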
I have a reporting system that does regular queries on a table with a
multipart index. I am running version 7.3.4. Here is the table
definition:
Table "public.ds_rec_fld"
Column | Type | Modifiers
---+-+---
ds
Greetings.
I have a table that will require 100,000 rows initially.
Assume the following (some of the field names have been changed for
confidentiality reasons):
CREATE TABLE baz (
baz_number CHAR(15) NOT NULL,
customer_id CHAR(39),
foobar_id INTEGER,
is_cancelled BOOL DEFAULT false,
So what is the ceiling on 32-bit processors for RAM? Most of the 64-bit
vendors are pushing Athlon64 and G5 as "breaking the 4GB barrier", and even
I can do the math on 2^32. All these 64-bit vendors, then, are talking
about the limit on RAM *per application* and not per machine?
> "sm" == scott marlowe writes:
sm> Note that Tom has mentioned problems with possible deadlocks when nicing
sm> individual backends before, so proceed with caution here.
I can see possible starvation, but if scheduling changes cause
deadlocks, then there's something wrong with the design.
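If anyone does experiment with it, the usual route is something like this
(7.3/7.4 column names, assuming the stats collector is on; the pid is made up):

    -- find the backend worth demoting
    SELECT procpid, current_query FROM pg_stat_activity;
    -- then at the shell, as the postgres user:
    --   renice 10 -p 12345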
> "AL" == Allen Landsidel <[EMAIL PROTECTED]> writes:
AL> Well I had the vacuums running every 15 minutes for a while.. via a
AL> simple cron script I wrote just to make sure no more than one vacuum
AL> ran at once, and to 'nice' the job.. but performance on the db does
"nice"-ing the client
> "AL" == Allen Landsidel <[EMAIL PROTECTED]> writes:
AL> maxusers 256
>>
>> let the system autoconfigure maxusers...
AL> Are you sure about this? I have always understood that explicitly
AL> setting this value was the best thing to do if you knew the maximum load.
Yes, recent freebsd kernels autoconfigure maxusers from available RAM when it is left at 0.
> "JB" == Josh Berkus <[EMAIL PROTECTED]> writes:
JB> Vivek,
>> Assuming your OS will use that much RAM for the cache... the whole
>> world's not Linux :-)
JB> It's not? Darn!
:-)
JB> Actually, what OS's can't use all idle ram for kernel cache? I
JB> should note that in my performance docs
Rob Nagler <[EMAIL PROTECTED]> writes:
> Incorrect. If the tuples are smaller, Oracle does the right thing. If
> there's enough space in the page, it shifts the tuples to make room.
> That's what pctfree, pctused and pctincrease allow you to control.
> It's all in memory so it's fast.
Vivek,
> Assuming your OS will use that much RAM for the cache... the whole
> world's not Linux :-)
It's not? Darn!
Actually, what OS's can't use all idle ram for kernel cache? I should note
that in my performance docs
--
Josh Berkus
Aglio Database Solutions
San Francisco
"Medora Schauer" <[EMAIL PROTECTED]> writes:
> Merge Join (cost=0.00..287726.10 rows=100221 width=58) (actual time=61.60..5975.63 rows=100425 loops=1)
>   Merge Cond: (("outer".shot_line_num = "inner".shot_line_num) AND ("outer".shotpoint = "inner".shotpoint))
>   -> Index Scan using hs
On Friday 24 October 2003 02:21, you wrote:
> Thank you, Sir.
> I'm very grateful for your answer.
> 1. I don't use Cygwin; my postgres is 7.4, and I only
> run it from the command prompt on Windows (postmaster);
Do you mean the native port from here:
http://momjian.postgresql.org/main/writings/
At 17:14 10/23/2003, Vivek Khera wrote:
> "AL" == Allen Landsidel <[EMAIL PROTECTED]> writes:
AL> I recently built a rather powerful machine to be used in a heavily
AL> accessed database.. the machine is a dual AthlonMP 2800+, 2GB of
AL> PC2100 ECC, and a 4x18GB RAID-0 using 15k rpm fujitsu MA