On Jul 19, 2005, at 3:01 PM, Tom Lane wrote:
You could possibly get some improvement if you can re-use prepared plans
for the queries; but this will require some fooling with the client code
(I'm not sure if DBD::Pg even has support for it at all).
DBD::Pg 1.40+ by default uses server-side prepared statements.
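A minimal sketch of what that plan reuse looks like in DBD::Pg (the table and
column names here are invented for illustration):

    use DBI;

    # Connect once and keep the handle; re-preparing per call
    # defeats the purpose of a prepared plan.
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 1 });

    # With DBD::Pg 1.40+ this creates a named server-side statement,
    # so the plan is built once and reused on every execute().
    $dbh->{pg_server_prepare} = 1;
    my $sth = $dbh->prepare('SELECT x FROM mytable WHERE y = ?');

    my @keys = (1, 2, 3);   # placeholder key values
    for my $y (@keys) {
        $sth->execute($y);
        while (my ($x) = $sth->fetchrow_array) {
            # ... use $x ...
        }
    }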
Hi,
I have a similar application,
but instead of adding new items to the db one at a time,
I retrieve new IDs from a sequence (actually only every 10,000
times) and write a CSV file from Perl.
When finished, I load all the new records in one run with COPY.
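A sketch of that bulk-load pattern with DBD::Pg's legacy COPY calls
(pg_putline/pg_endcopy); the table and rows are placeholders:

    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                           { RaiseError => 1 });

    my @rows = ( [1, 'foo'], [2, 'bar'] );   # accumulated by the script

    # Stream every row through a single COPY instead of
    # issuing one INSERT (and one transaction) per row.
    $dbh->do('COPY events (id, payload) FROM STDIN');
    for my $r (@rows) {
        $dbh->pg_putline(join("\t", @$r) . "\n");
    }
    $dbh->pg_endcopy;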
Sorry for the lack of specifics...
We have a file generated as a list of events, one per line. Suppose
lines 1,2,3,5,7,11,etc were related, then the last one would specify
that it's the last event. Gradually this gets assembled by a Perl
script and when the last event is encountered, it gets inserted.
You could have a program pre-parse your log and put it in a format
understandable by COPY, then load it in a temporary table and write a part
of your application simply as a plpgsql function, reading from this table
and doing queries (or a plperl function)...
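A rough sketch of that staging-table idea, assuming a connected $dbh as above
and pre-parsed tab-delimited lines in @preparsed_lines; every object name here
is invented, and plpgsql must already be installed in the database:

    # Load the pre-parsed log into a staging table, then let a
    # plpgsql function do the per-event work server-side.
    $dbh->do('CREATE TEMP TABLE staging_events (id int, payload text)');
    $dbh->do('COPY staging_events FROM STDIN');
    $dbh->pg_putline("$_\n") for @preparsed_lines;
    $dbh->pg_endcopy;

    # PG 7.4 wants the function body as a quoted string.
    $dbh->do(q{
        CREATE OR REPLACE FUNCTION process_staging() RETURNS integer AS '
        DECLARE
            r RECORD;
            n integer := 0;
        BEGIN
            FOR r IN SELECT * FROM staging_events LOOP
                INSERT INTO events (id, payload) VALUES (r.id, r.payload);
                n := n + 1;
            END LOOP;
            RETURN n;
        END;
        ' LANGUAGE plpgsql;
    });
    my ($n) = $dbh->selectrow_array('SELECT process_staging()');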
On Tue, 2005-07-19 at 17:04 -0400, Oliver Crosby wrote:
> since triggers work with COPY, you could probably write a trigger that
> looks for this condition and does the ID processsing you need; you could
> thereby enjoy the enormous speed gain resulting from COPY and maintain
> your data continuity.
So... (bear with me here.. trying to make sense of this)
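To make the trigger suggestion concrete, a sketch with invented names (the
is_last flag, events table, and event_groups table are all placeholders; the
point is that row-level triggers do fire for rows loaded via COPY):

    # A row-level trigger runs for every row COPY inserts, so the
    # end-of-group bookkeeping can happen server-side during the load.
    $dbh->do(q{
        CREATE OR REPLACE FUNCTION handle_last_event() RETURNS trigger AS '
        BEGIN
            IF NEW.is_last THEN
                -- placeholder for the ID processing the poster needs
                UPDATE event_groups SET complete = true
                 WHERE group_id = NEW.group_id;
            END IF;
            RETURN NEW;
        END;
        ' LANGUAGE plpgsql;
    });
    $dbh->do(q{
        CREATE TRIGGER trg_last_event BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE handle_last_event();
    });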
On Tue, 2005-07-19 at 16:28 -0400, Oliver Crosby wrote:
> > If it is possible try:
> > 1) wrapping many inserts into one transaction
> > (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to
> > handle fewer transactions per second (each of your inserts is its own
> > transaction), it may work faster.
On 07/19/2005-02:41PM, Oliver Crosby wrote:
>
> No queries in particular appear to be a problem.
That could mean they are ALL a problem. Let's see some EXPLAIN ANALYZE
results just to rule it out.
> At the moment it's just one user,
With 1 user PostgreSQL will probably never beat MySQL,
but with multiple concurrent users the picture usually changes.
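To produce the EXPLAIN ANALYZE output asked for above straight from the
script (assuming the connected $dbh from earlier; the query is a placeholder):

    # EXPLAIN ANALYZE actually runs the query and reports real
    # row counts and timings for each node of the plan.
    my $plan = $dbh->selectall_arrayref(
        'EXPLAIN ANALYZE SELECT x FROM mytable WHERE y = 42');
    print "$_->[0]\n" for @$plan;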
> If it is possible try:
> 1) wrapping many inserts into one transaction
> (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to
> handle fewer transactions per second (each of your inserts is its own
> transaction), it may work faster.
Aye, that's what I have it doing right now. The transaction...
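What that batching looks like in DBD::Pg terms (the batch size of 1000 and
the table are arbitrary illustrations; @rows stands in for the script's data):

    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });
    my $sth = $dbh->prepare('INSERT INTO events (a, b, c) VALUES (?, ?, ?)');

    my @rows = ( [1, 2, 3], [4, 5, 6] );
    my $count = 0;
    for my $row (@rows) {
        $sth->execute(@$row);
        # One commit per row means one WAL flush per row; batching
        # amortizes that cost across many inserts.
        $dbh->commit if ++$count % 1000 == 0;
    }
    $dbh->commit;   # flush the final partial batch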
PS: Where can I find benchmarks comparing PHP vs Perl vs Python in
terms of speed of executing prepared statements?
I'm afraid you'll have to do these yourself!
And, I don't think the Python drivers support real prepared statements
(the speed of psycopg is really good, though).
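For the Perl side, a starting point for rolling your own numbers with the
core Benchmark module (assumes the connected $dbh from earlier; the query
is a placeholder):

    use Benchmark qw(timethis);

    my $sth = $dbh->prepare('SELECT x FROM mytable WHERE y = ?');

    # Time 10,000 executes of the already-prepared statement.
    timethis(10_000, sub {
        $sth->execute(int rand 1000);
        $sth->fetchall_arrayref;
    });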
On 7/19/05, Oliver Crosby <[EMAIL PROTECTED]> wrote:
> We had low resource utilization and poor throughput on inserts of
> thousands of rows within a single database transaction. There were a
> lot of configuration parameters we changed, but the one which helped the
> most was wal_buffers -- we wound up setting it to 1000. This may be
> higher than it needs to be.
On Jul 19, 2005, at 3:36 PM, Steinar H. Gunderson wrote:
On Tue, Jul 19, 2005 at 03:16:31PM -0400, Tom Lane wrote:
Ah, but are they really prepared, or is DBD::Pg faking it by inserting
parameter values into the query text and then sending the assembled
string as a fresh query?
They are really prepared.
On Tue, Jul 19, 2005 at 03:16:31PM -0400, Tom Lane wrote:
> Ah, but are they really prepared, or is DBD::Pg faking it by inserting
> parameter values into the query text and then sending the assembled
> string as a fresh query?
They are really prepared.
/* Steinar */
> If you're running only a single query at a time (no multiple clients),
> then this is pretty much the definition of a MySQL-friendly workload;
> I'd have to say we are doing really well if we are only 50% slower.
> Postgres doesn't have any performance advantages until you get into
> complex queries.
Oliver Crosby <[EMAIL PROTECTED]> writes:
>> You could possibly get some improvement if you can re-use prepared plans
>> for the queries; but this will require some fooling with the client code
>> (I'm not sure if DBD::Pg even has support for it at all).
> Aye. We have prepared statements.
Ah, but are they really prepared, or is DBD::Pg faking it by inserting
parameter values into the query text and then sending the assembled
string as a fresh query?
I can't say whether MySQL is faster for very small queries (like
SELECT'ing one row based on an indexed field).
That's why I was asking you about the language...
I assume you're using a persistent connection.
For simple queries like this, PG 8.x seemed to be a lot faster than 7.4.
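On the persistent-connection point: if the script gets re-run per batch,
a cached handle avoids paying connection setup each time. A minimal sketch:

    use DBI;

    # connect_cached returns the same underlying connection on
    # repeated calls with identical arguments, instead of opening
    # a new backend process every time.
    my $dbh = DBI->connect_cached('dbi:Pg:dbname=mydb', 'user', 'pass',
                                  { RaiseError => 1 });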
On Tue, Jul 19, 2005 at 03:01:00PM -0400, Tom Lane wrote:
> You could possibly get some improvement if you can re-use prepared plans
> for the queries; but this will require some fooling with the client code
> (I'm not sure if DBD::Pg even has support for it at all).
Newer versions have, when compiled against the 8.0 client libraries.
Oliver Crosby <[EMAIL PROTECTED]> writes:
> The queries are all simple insert or select statements on single tables.
> Eg. select x from table where y=?; or insert into table (a, b, c)
> values (?, ?, ?);
> In the case of selects where it's a large table, there's an index on
> the column being searched.
Hi Oliver,
We had low resource utilization and poor throughput on inserts of
thousands of rows within a single database transaction. There were a
lot of configuration parameters we changed, but the one which helped the
most was wal_buffers -- we wound up setting it to 1000. This may be
higher than it needs to be.
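In postgresql.conf terms (the 1000 figure is the poster's report, not a
recommendation, and changing it requires a server restart):

    # postgresql.conf -- each WAL buffer is one 8 kB page,
    # so 1000 buffers is roughly 8 MB of WAL buffering.
    wal_buffers = 1000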
What programming language are these scripts written in ?
> What programming language are these scripts written in ?
Perl, using the DBD::Pg interface instead of command-lining it through psql.
> Identify what the problem is first of all. Some things to consider:
> - Are there particular queries giving you trouble?
> - Is your load mostly reads or mostly writes?
> - Do you have one user or 100?
> - Are you block-loading data efficiently where necessary?
> - Have you indexed both sides of your joins?
I was hoping to start with tuning postgres to match the hardware, but
in any case..
The queries are all simple insert or select statements on single tables.
Eg. select x from table where y=?; or insert into table (a, b, c)
values (?, ?, ?);
In the case of selects where it's a large table, there's an index on
the column being searched.
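For queries of that shape, the matching index is the main lever; a sketch
with invented names, assuming the connected $dbh from earlier:

    # A btree index on the filtered column turns
    #   SELECT x FROM mytable WHERE y = ?
    # from a sequential scan into an index scan on large tables.
    $dbh->do('CREATE INDEX mytable_y_idx ON mytable (y)');
    $dbh->do('ANALYZE mytable');   # refresh planner statistics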
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the load
on the server is only about 21%.
What scripts? What do they do?
Oh, and 7.4.8 is the latest release - worth upgrading.
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and
effective_cache_size to 65536 (512MB), but neither made a noticeable difference.
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
What queries?
What is your structure?
Have you tried explain analyze?
How many rows are in the tables?
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and effective_cache_size to 65536 (512MB), but neither made a noticeable difference.
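Those settings in postgresql.conf terms, with the 7.4 unit math spelled out
(the values are the poster's, not recommendations):

    # postgresql.conf (PostgreSQL 7.4 units)
    sort_mem = 8192                  # in kB -> 8 MB per sort operation
    shared_buffers = 65536           # in 8 kB pages -> 512 MB
    effective_cache_size = 65536     # in 8 kB pages -> 512 MB
                                     # (planner hint only; allocates nothing)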