read-write with different memory profiles, so I
assume my only real option is to throw more RAM at it. I don't have $$$
for another array/server for a master/slave right now. Or perhaps
tweaking my .conf file? Are newer PG versions more memory efficient?
Thanks,
Aaron
--
Aaron Turner
http
On Thu, Oct 7, 2010 at 12:02 PM, Stephen Frost sfr...@snowman.net wrote:
* Aaron Turner (synfina...@gmail.com) wrote:
Basically, each connection is taking about 100MB resident
Errr.. Given that your shared buffers are around 100M, I think you're
confusing what you see in top with reality.
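Stephen's point can be made concrete: top's RES column includes the shared buffer pages a backend has touched, so the truly private per-backend memory is roughly RES minus SHR. A quick sketch (the numbers here are hypothetical, not taken from Aaron's box):

```python
# Hypothetical top output for one PostgreSQL backend, in KB.
# RES counts shared memory the backend has touched; SHR reports
# the shared portion, so private memory is roughly the difference.
res_kb = 102_400   # ~100 MB resident, as reported by top
shr_kb = 98_304    # ~96 MB of that is shared_buffers pages

private_kb = res_kb - shr_kb
print(f"actually private: ~{private_kb / 1024:.1f} MB per backend")
```

So a backend that "takes 100MB" in top may only be using a few MB of private memory on top of the shared pool.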
On Thu, Oct 7, 2010 at 2:47 PM, Greg Smith g...@2ndquadrant.com wrote:
Aaron Turner wrote:
Are newer PG versions more memory efficient?
Moving from PostgreSQL 8.1 to 8.3 or later should make everything you do
happen 2X to 3X faster, before even taking into account that you can tune
to be a perfectly good index. Is there a better way to
construct this query?
Thanks,
Aaron
--
Aaron Turner
http://synfin.net/
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
-- Benjamin Franklin
On Tue, Jun 16, 2009 at 2:37 PM, Alvaro
Herrera alvhe...@commandprompt.com wrote:
Aaron Turner wrote:
I'm trying to figure out how to optimize this query (yes, I ran
vacuum/analyze):
musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid
NOT IN (SELECT pcap_storeid FROM
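The usual fix for a slow `NOT IN (SELECT ...)` on older PostgreSQL is rewriting it as an anti-join (`LEFT JOIN ... IS NULL`) or `NOT EXISTS`. A minimal sketch of the equivalence, using SQLite purely for illustration; the `pcap_index` table name is hypothetical, since the original subquery was cut off:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical miniature of the schema: pcap_store rows are candidates
# for deletion when nothing in pcap_index references them.
cur.execute("CREATE TABLE pcap_store (pcap_storeid INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE pcap_index (pcap_storeid INTEGER NOT NULL)")
cur.executemany("INSERT INTO pcap_store VALUES (?)", [(1,), (2,), (3,)])
cur.execute("INSERT INTO pcap_index VALUES (1)")

# NOT IN form (slow on PG 8.x when the subquery result is large):
not_in = cur.execute(
    "SELECT pcap_storeid FROM pcap_store WHERE pcap_storeid "
    "NOT IN (SELECT pcap_storeid FROM pcap_index) "
    "ORDER BY pcap_storeid").fetchall()

# Equivalent anti-join, which the planner handles much better:
anti = cur.execute(
    "SELECT s.pcap_storeid FROM pcap_store s "
    "LEFT JOIN pcap_index i ON i.pcap_storeid = s.pcap_storeid "
    "WHERE i.pcap_storeid IS NULL "
    "ORDER BY s.pcap_storeid").fetchall()

assert not_in == anti == [(2,), (3,)]
print("both forms select rows 2 and 3 for deletion")
```

Note that NOT IN and the anti-join only agree when the subquery column cannot be NULL; here `pcap_storeid` is assumed NOT NULL.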
On Tue, Jun 16, 2009 at 5:30 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jun 16, 2009 at 7:39 PM, Aaron Turner synfina...@gmail.com wrote:
On Tue, Jun 16, 2009 at 2:37 PM, Alvaro
Herrera alvhe...@commandprompt.com wrote:
Aaron Turner wrote:
I'm trying to figure out how to optimize
width=4)
I know the costs are just relative, but I assumed
cost=19229.08..29478.99 wouldn't mean 5 minutes of effort even on crappy
hardware. Honestly, I'm not complaining; 5 minutes is acceptable for this
query (it's a one-time thing), I was just surprised.
Thanks for the help!
--
Aaron Turner
http
--
Aaron Turner
http://synfin.net/
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little
temporary Safety,
deserve neither Liberty nor Safety.
-- Benjamin Franklin
--
Sent via pgsql-performance mailing list
, waiting for an ack to be received that could cause a
large slow down.
Do the acks include any data? If so, it's indicative of PG
networking protocol overhead; probably not much you can do about that.
Without looking at a pcap myself, I'm not sure I can help out any more.
--
Aaron Turner
http
of the settings in the postgresql.conf actually
dropped performance significantly. Looks like I'm starving the disk
cache.
4) I'm going to assume that moving to a bytea helped some (width is 54
vs 66), but nothing really measurable.
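The width drop makes sense: a 48-character hex digest stored as bytea is 24 raw bytes rather than 48 characters. A quick illustration (the digest value below is made up):

```python
# A hypothetical 48-character hex digest, as it would sit in char(48):
hex_digest = "a3" * 24
# What bytea would store instead: the raw bytes behind the hex encoding.
raw = bytes.fromhex(hex_digest)

print(len(hex_digest), "hex chars ->", len(raw), "raw bytes")
```

Halving the key width also halves the amount of index that has to fit in cache, which is where most of the win comes from.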
Thanks everyone for your help!
--
Aaron Turner
http://synfin.net
them is not always
practical; it depends on your application, and the table will be exclusively
locked during the transaction due to the index drop.
Yep. In my case it's not a huge problem right now, but I know it will
become a serious one sooner or later.
Thanks a lot Marc. Lots of useful info.
--
Aaron
On 2/12/06, Tom Lane [EMAIL PROTECTED] wrote:
Aaron Turner [EMAIL PROTECTED] writes:
Well, before I go about re-architecting things, it would be good to
have a strong understanding of just what is going on. Obviously, the
unique index on the char(48) is the killer. What I don't know
On 2/11/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:
On 2/10/06, Matthew T. O'Connor matthew@zeut.net wrote:
Aaron Turner wrote:
Basically, I need some way to optimize PG so that I don't have to drop
that index every time
On 2/10/06, hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
On 2/10/06, Aaron Turner [EMAIL PROTECTED] wrote:
So I'm trying to figure out how to optimize my PG install (8.0.3) to
get better performance without dropping one of my indexes.
Basically, I have a table of 5M records with 3
On 2/10/06, Matthew T. O'Connor matthew@zeut.net wrote:
Aaron Turner wrote:
So I'm trying to figure out how to optimize my PG install (8.0.3) to
get better performance without dropping one of my indexes.
What about something like this:
begin;
drop index slow_index_name;
update ...;
create index slow_index_name ...;
commit;
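Matthew's pattern above, sketched end to end against SQLite for illustration (table and index names are placeholders; in PostgreSQL, the DROP INDEX takes an exclusive lock on the table for the rest of the transaction, as noted up-thread):

```python
import sqlite3

# isolation_level=None disables implicit transactions so we can
# issue BEGIN/COMMIT ourselves, mirroring the suggested pattern.
con = sqlite3.connect(":memory:", isolation_level=None)
cur = con.cursor()
cur.execute("CREATE TABLE t (k TEXT, v INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [("a", 0), ("b", 0)])
cur.execute("CREATE UNIQUE INDEX slow_index_name ON t (k)")

# Bulk update without paying per-row index maintenance:
cur.execute("BEGIN")
cur.execute("DROP INDEX slow_index_name")
cur.execute("UPDATE t SET v = v + 1")
cur.execute("CREATE UNIQUE INDEX slow_index_name ON t (k)")
cur.execute("COMMIT")

print(cur.execute("SELECT v FROM t ORDER BY k").fetchall())
```

Rebuilding the index once at the end is a single sequential pass, which is usually far cheaper than updating it for every modified row.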