pgsql performance gurus,
I truly appreciate the suggestions provided.
I have tried each one separately to determine the
best fit. I have included results for each suggestion.
I have also included my entire postgresql.conf file so
you can see our base configuration.
Each result is based on an in-
John Mendenhall <[EMAIL PROTECTED]> writes:
> Merge Join  (cost=4272.84..4520.82 rows=1230 width=21) (actual time=3998.771..4603.739 rows=699 loops=1)
>   Merge Cond: ("outer".contact_id = "inner".id)
>   ->  Index Scan using lead_requests_contact_id_idx on lead_requests lr  (cost=0.00..74
Thank you very much in advance for any pointers you can
provide. And, if this is the wrong forum for this question,
please let me know and I'll ask it elsewhere.
I think you may want to increase your statistics_target, plus make sure
you are running ANALYZE. EXPLAIN ANALYZE output would do.
Sincerely,
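In case it helps, here is a minimal sketch of that suggestion, applied to the join column visible in the plan above; the target value of 100 is only an example (8.0's default per-column target is 10):

  -- raise the per-column statistics target for the join column
  ALTER TABLE lead_requests ALTER COLUMN contact_id SET STATISTICS 100;

  -- regather statistics so the planner sees the new target
  ANALYZE lead_requests;

  -- then re-run EXPLAIN ANALYZE on the query and compare the row estimates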
pgsql performance gurus,
We ported an application from Oracle to PostgreSQL.
We are experiencing an approximately 50% performance
hit. I am in the process of isolating the problem.
I have searched the internet (Google) and tried various
things. Only one thing seems to work. I am trying to
find
Jean-Max Reymond <[EMAIL PROTECTED]> writes:
> so the request runs in 26.646 ms on the Sun and 0.469 ms on my laptop :-(
> the databases are the same, vacuumed, and I think Postgres (8.0.3)
> is well configured on both.
Are you sure they're both vacuumed? The Sun machine's behavior seems
consistent wit
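One quick way to check, as a sketch (the table name is not shown in the thread, so mytable is a placeholder): run a verbose vacuum on both machines and compare the dead-row counts reported.

  -- the VERBOSE output shows how many dead row versions were found and removed
  VACUUM VERBOSE ANALYZE mytable;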
Jean-Max,
> I have two computers, one laptop (1.5 GHz, 512 MB RAM, one 4200 RPM disk)
> and one big Sun (8 GB RAM, 2 SCSI disks).
Did you run each query several times? It looks like the index is cached
on one server and not on the other.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
2005/6/30, Jean-Max Reymond <[EMAIL PROTECTED]>:
> so the request runs in 26.646 ms on the Sun and 0.469 ms on my laptop :-(
> the databases are the same, vacuumed, and I think Postgres (8.0.3)
> is well configured on both.
> The Sun has two disks and uses tablespaces to keep the index on one disk
> and dat
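For reference, a sketch of the layout being described; the tablespace names, paths, and table definition are hypothetical (only the evolution and indx column names come from the plan posted in this thread):

  -- one tablespace per physical disk (paths are placeholders)
  CREATE TABLESPACE data_disk LOCATION '/disk1/pg_data';
  CREATE TABLESPACE idx_disk  LOCATION '/disk2/pg_idx';

  -- hypothetical table using the columns visible in the posted plan
  CREATE TABLE evolutions (
      id        integer PRIMARY KEY,
      evolution integer,
      indx      integer
  ) TABLESPACE data_disk;

  -- keep the index on the second disk
  CREATE INDEX evolutions_evolution_indx_idx ON evolutions (evolution, indx)
      TABLESPACE idx_disk;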
Hi,
I have two computers, one laptop (1.5 GHz, 512 MB RAM, one 4200 RPM disk)
and one big Sun (8 GB RAM, 2 SCSI disks).
On my laptop, I have this EXPLAIN ANALYZE:
Sort  (cost=7.56..7.56 rows=1 width=28) (actual time=0.187..0.187 rows=0 loops=1)
  Sort Key: evolution, indx
  ->  Index Scan using index
I was hesitant to jump in on this because I am new to PostgreSQL and
haven't seen this problem with _it_, but I have seen this with the
Sybase database products. You can configure Sybase to disable the Nagle
algorithm. If you don't, any query which returns rows too big to fit in
their network buf
> Milan Sekanina <[EMAIL PROTECTED]> writes:
> > We are running an application that uses psqlodbc driver on Windows XP to
> > connect to a server and for some reason the download of data from the
> > server is very slow. We have created a very simple test application that
> > inserts a larger amoun
Milan Sekanina <[EMAIL PROTECTED]> writes:
> We are running an application that uses psqlodbc driver on Windows XP to
> connect to a server and for some reason the download of data from the
> server is very slow. We have created a very simple test application that
> inserts a larger amount of da
Martin Lesser <[EMAIL PROTECTED]> writes:
> the time needed for a daily VACUUM on a table with about 28 million records
> increases from day to day.
My guess is that the original timings were artificially low because the
indexes were in nearly perfect physical order, and as that condition
degrades ove
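If degraded index order is indeed the cause, a hedged sketch of the usual remedies (big_table stands in for the poster's 28-million-row table, and the index name is hypothetical):

  -- rebuild all indexes on the table to restore their physical order
  REINDEX TABLE big_table;

  -- or physically re-sort the table itself around one index
  -- (8.0 syntax; takes an exclusive lock for the duration)
  CLUSTER big_table_pkey ON big_table;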
You should provide a bit more detail on what happens if you want people to
help you.
Typically you will be asked for an EXPLAIN ANALYZE of your query.
As a first tip, if your table contains much more than 30,000 rows you could
try to set up a partial index with a
thru_date IS NULL condition.
regards,
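A minimal sketch of that partial index; the table name and the indexed column are placeholders, since the original query is not shown:

  -- only rows satisfying thru_date IS NULL are stored in this index,
  -- so it stays small and matches queries that filter on that condition
  CREATE INDEX requests_open_idx ON requests (id)
      WHERE thru_date IS NULL;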
We are running an application that uses psqlodbc driver on Windows XP to
connect to a server and for some reason the download of data from the
server is very slow. We have created a very simple test application that
inserts a larger amount of data into the database and uses a simple
"SELECT * f
Hi,
the time needed for a daily VACUUM on a table with about 28 million records
increases from day to day. What's the best way to avoid this? A full
vacuum will probably take too much time; are there other ways to keep
vacuum performant?
The database was updated to postgres-8.0 on Jun 04 this year.
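A hedged sketch of settings that often keep plain (non-FULL) VACUUM fast on a table this size under 8.0; the values and the table name are examples only:

  -- give VACUUM more memory for its dead-tuple list (value is in KB in 8.0)
  SET maintenance_work_mem = 262144;   -- 256 MB

  -- optionally throttle VACUUM's I/O instead of skipping runs
  SET vacuum_cost_delay = 10;          -- milliseconds, default 0

  -- also check max_fsm_pages in postgresql.conf (needs a restart)
  -- so freed space can actually be reused between vacuums
  VACUUM VERBOSE ANALYZE big_table;    -- big_table is a placeholder name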